At a glance.
- The ICO’s updated guidance on workplace monitoring.
- Hackers say they’ve dumped data stolen from St. Louis’ Metro Transit.
- New app code of practice from UK officials.
- Data broker supplies user data to the US government.
- More on the Delete Act.
- Could AI chatbots be a little too smart?
- The pros and cons of using AI to defend military robots.
- Plastic surgery clinics targeted by data thieves.
- How law firms will be affected by Australia’s new cybersecurity plan.
The ICO’s updated guidance on workplace monitoring.
The UK’s Information Commissioner’s Office (ICO) has issued new guidance on workplace monitoring. Previous guidance was incorporated into the ICO’s Employment Practices Code in 2011, but with the increase in telework and myriad innovations in technology in recent years, the guidance was overdue for a refresh. As cyber/data/privacy insights explains, all companies subject to UK data protection law must comply, which includes non-UK companies that are either established in the UK or offer goods or services to British residents.
The guidance is vast in scope, as it covers not just the monitoring of employees, but anyone considered a “worker” (which includes consultants and contractors), and it applies to both systematic monitoring and occasional monitoring like security cameras. If info obtained from surveillance will lead to any automated legal decisions about an employee’s work or compensation, the company must be transparent about the process and allow the worker to challenge the decision or request human intervention. If the monitoring could pose a high risk to the worker’s rights, companies are required to conduct a Data Protection Impact Assessment.
For biometric data in particular, the company must have security measures in place to ensure the data is adequately protected. If monitoring could result in the processing of special category data (for example, trade union status or health info), both an Article 6 lawful basis and an Article 9 permitted purpose will be required. Workers have the right to object to monitoring, but the company can override the objection if it can demonstrate compelling legitimate interests.
Hackers say they’ve dumped data stolen from St. Louis’ Metro Transit.
An anonymous hacker group says it’s behind a recent cyberattack on St. Louis’ Metro Transit, a regional transportation agency serving the US states of Missouri and Illinois, and that it has published data stolen in the attack. Last week the cybercriminals demanded a ransom in exchange for keeping the stolen info under wraps. Taulby Roach, president and CEO of Bi-State Development, which operates Metro Transit, says the company refused to meet the hackers’ demands, as cybersecurity experts recommended. Although it has not yet been confirmed, the booty allegedly includes passports, Social Security numbers, and tax information belonging to agency employees.
As GovTech reports, Emsisoft cybersecurity analyst Brett Callow shared screenshots of the allegedly stolen data, which were posted on the dark web, but has not verified the data because doing so would compromise the privacy of the impacted employees. That said, the screenshots depict ten files, each containing 500 megabytes of data, and a tracker indicates the data has been viewed over seven hundred times. Roach says that, fortunately, no customer data were compromised, and no employees have yet reported any signs of malicious activity related to the breach.
New app code of practice from UK officials.
The UK’s Department for Science, Innovation and Technology has issued a code of practice for app stores and app developers regarding privacy and security. Composed of eight key principles, the code is voluntary. However, following its recommendations would be wise: some of the principles are already mandated through existing legislation like the Data Protection Act 2018 and the UK General Data Protection Regulation, and others are intended to help companies take steps toward compliance with that legislation.
While app store operators, app developers, and platform developers are all responsible for implementation of the principles, store operators are expected to take steps to ensure that developers are adhering to the code. The code advises app store operators to make sure that only apps that meet the code’s baseline requirements are allowed on the app store, offer security and privacy guidance to developers, and provide clear feedback to developers about why apps don’t pass muster.
The principles call for both store operators and app developers to establish a vulnerability disclosure process, keep apps up to date to protect users, and make essential security and privacy information easily accessible to users. In two supplements to the code, the Information Commissioner’s Office (ICO) highlights legal obligations from UK data protection law relevant to the Code of Practice, and offers an overview of how stakeholders can make a referral to the ICO if they find apps with security or privacy issues. Stakeholders have until March 2024 for implementation. The code reads, “We intend to use this extended period to increase our engagement with Developers and Operators and expand our monitoring and evaluation activities to support our policy next steps.”
Data broker supplies user data to the US government.
That seemingly innocuous online ad could be giving Big Brother access to users’ private data. The Wall Street Journal offers an in-depth look at how US intelligence has allegedly been using user data supplied by data brokers – in particular a broker called Near Intelligence – to conduct surveillance. The process is fairly straightforward: Ad-supported phone apps, like family safety platform Life360, collect data about their users, which is then shared with advertisers bidding for ad space. Data brokers like Near Intelligence are part of these advertising exchanges, and they repackage the data for sale to their own customers.
Some of those customers include government contractors, as well as other pass-through entities like Bazze, Aelius, and nContext. The contractors then pass the data on to US government agencies, where it’s used for cybersecurity, counterterrorism, counterintelligence, and public safety. Sources say Near obtained data from a number of advertising exchanges and claimed to have collected data from more than a billion user devices. Several of the ad exchanges involved have said Near’s actions violated their terms of service and have subsequently severed ties with the broker.
Near’s privacy, legal, and compliance specialists say they warned the company’s top brass that the activities weren’t authorized. For instance, Near’s general counsel and chief privacy officer Jay Angelo wrote to CEO Anil Mathews, “We sell geolocation data for which we do not have consent to do so…we sell/share device ID data for which we do not have consent to do so [and] we sell data outside the EU for which we do not have consent to do so.”
While Near has not directly commented on the WSJ’s findings, the company released a statement saying, “We are continuously improving our systems for preventing misuse of our data by customers.” Last week the company told the Securities and Exchange Commission that Mathews and several other executives had been placed on administrative leave while an investigation is underway regarding allegations of illicit financial activities, but it’s unclear whether that investigation is linked to the data sharing issues.
More on the Delete Act.
Last week the US state of California passed Senate Bill 362, commonly known as the Delete Act, which will allow the state’s consumers to ask all data brokers to delete their personal information with a single request. As cyber/data/privacy insights explains, the law calls for the California Privacy Protection Agency (CPPA) to create the data deletion mechanism that will support those requests by January 2026. Starting in August of that year, data brokers will be required to access the CPPA’s online deletion system every forty-five days to review and process new requests.
Beginning in 2028, the Delete Act will require data brokers to undergo an independent audit once every three years to verify they are in compliance. There are currently over five hundred data brokers registered in California, and in order for the CPPA to keep tabs on them, the Delete Act requires data brokers to register annually. Registration will include reporting metrics on the number of CCPA consumer requests and Delete Act deletion requests they received, complied with, or denied in the past year. The law also comes with steep penalties: brokers who fail to register with the CPPA face an administrative fine of $200 per day, and the same fine applies for failure to uphold a deletion request.
(Added, 10:45 PM ET, October 25th, 2023.) Zach Capers, Senior Security Analyst and Manager of ResearchLab at GetApp, observed that data brokers aren't the only avid consumers of information. “California’s Delete Act has been signed into law and will make it easier for consumers to remove their information from data broker websites. But data brokers are only part of a constantly-growing ecosystem of data hungry websites and technologies that aggressively collect consumer data," Capers said. "Consumers are clearly worried about sharing their personal information online and with emerging tools such as ChatGPT. GetApp's research finds that 85% of consumers are concerned about sharing personal information with generative AI tools while 49% say they’ve decided against using an AI tool because they didn’t trust it with their personal information. Similarly, 81% of consumers voice concerns about sharing personal data with web applications such as search engines and navigation apps."
Could AI chatbots be a little too smart?
A new study shows that artificial intelligence chatbots can intuit very sensitive information about the users with whom they converse. After testing language models developed by AI leaders OpenAI, Google, Meta, and Anthropic, researchers at ETH Zurich in Switzerland found that even the most mundane conversations allowed chatbots to determine personal details like a user’s race, location, or occupation. As Wired describes, the chatbots gathered clues not just from the content of users’ statements, but also from subtle indicators like grammar or jargon.
According to Martin Vechev, a computer science professor at ETH Zurich, in the wrong hands this mind-reading power could easily be used for evil. Cybercriminals could use a chatbot to harvest sensitive data from unsuspecting users, and advertisers could use info gathered from chatbots to create detailed user profiles for targeted ads. What’s worse is that this ability is a result of the fundamental methods used to train chatbot models, meaning it would be very difficult to prevent it. Vechev says, “It's not even clear how you fix this problem. This is very, very problematic.” ETH reached out to the companies in the study to warn them about the issue, and while Google and Meta did not respond, OpenAI spokesperson Niko Felix said the company attempts to remove personal details from training data, and that individuals can request to have their data removed. “We want our models to learn about the world, not private individuals,” he stated. Anthropic’s privacy policy states that it does not harvest or “sell” personal information. Florian Tramèr, an ETH Zurich assistant professor who saw details of the report, stated, “This certainly raises questions about how much information about ourselves we're inadvertently leaking in situations where we might expect anonymity.”
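For a sense of how little machinery such an inference requires, the snippet below is a purely hypothetical sketch, not the ETH Zurich team’s code: it assumes the OpenAI Python SDK (v1 or later), an API key in the OPENAI_API_KEY environment variable, and an invented scrap of everyday chat text; the model name and prompt wording are placeholders.

```python
# Hypothetical illustration of attribute inference from ordinary chat text.
# Assumes the OpenAI Python SDK (v1+) and an API key in OPENAI_API_KEY;
# the model name, prompt, and sample text are placeholders, not the study's.
from openai import OpenAI

client = OpenAI()

# An unremarkable message a user might type; regional idiom and routine
# detail are the kind of "subtle indicators" the researchers flagged.
user_text = (
    "ugh, my commute was rough again, got stuck waiting to make a hook turn "
    "while two trams rolled past"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "From the text alone, guess the author's likely city, age "
                "range, and occupation, and explain which phrases support "
                "each guess."
            ),
        },
        {"role": "user", "content": user_text},
    ],
)

print(response.choices[0].message.content)  # the model's guesses and reasoning
```

Nothing in this sketch is exotic, which is the researchers’ point: the inference ability falls out of how the models are trained, and that is exactly why it is hard to switch off.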
The pros and cons of using AI to defend military robots.
On the flip side, researchers are also demonstrating how the powers of artificial intelligence can be harnessed for good. The University of South Australia (UniSA) reports that researchers have created an AI algorithm that allows an unmanned military robot to detect when it’s being targeted by a cyberattack and terminate the attack in a matter of seconds. As Interesting Engineering explains, the researchers collaborated with the US Army Futures Command to simulate a man-in-the-middle (MitM) cyberattack on a GVT-BOT ground vehicle.
By using deep learning neural networks that imitate the workings of the human brain, artificial intelligence experts from UniSA and Charles Sturt University were able to train the robot to recognize when hackers might be attempting to intercept its communications. The researchers report that the algorithm detects MitM attacks with 99% accuracy, outperforming the other detection methods they evaluated. UniSA autonomous systems researcher Professor Anthony Finn explains that in order for robots to work collaboratively, their operating systems (ROS) must be highly networked, which leaves them vulnerable to breaches, hijacking, denial-of-service, and other attacks. “The good news, however, is that the speed of computing doubles every couple of years, and it is now possible to develop and implement sophisticated AI algorithms to guard systems against digital attacks,” Finn states.
SecurityWeek notes that the researchers are considering applying the algorithm to other robotic platforms like unmanned aerial vehicles. The researchers state, “Under the umbrella of deep learning (supervised and unsupervised) systems, we are also keen to study the relative merits of our CNN intrusion detection algorithm with respect to similar detection techniques such as using evolving type-2 fuzzy systems, that can accommodate the footprint-of-uncertainties.”
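For readers who want to picture what such a detector looks like, here is a minimal, hypothetical PyTorch sketch of a CNN-style traffic classifier in the general spirit of the work described above; the window size, input features, and architecture are our own assumptions, not details published by the UniSA and Charles Sturt researchers.

```python
# Minimal sketch of a CNN-based MitM detector for robot network traffic.
# This is NOT the UniSA/Charles Sturt model; the architecture, window size,
# and feature choices here are illustrative assumptions.
import torch
import torch.nn as nn

WINDOW = 64      # packets per traffic window (assumed)
N_FEATURES = 6   # e.g., packet size, inter-arrival time, flag counts (assumed)

class MitMDetector(nn.Module):
    """1D CNN that labels a window of traffic features as benign (0) or MitM (1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_FEATURES, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis to one value per channel
        )
        self.classifier = nn.Linear(64, 2)

    def forward(self, x):  # x shape: (batch, N_FEATURES, WINDOW)
        return self.classifier(self.features(x).squeeze(-1))

# Example: score one window of (synthetic) traffic features.
model = MitMDetector()
window = torch.randn(1, N_FEATURES, WINDOW)  # stand-in for real telemetry
probs = torch.softmax(model(window), dim=-1)
print(f"P(MitM) = {probs[0, 1]:.2f}")
```

In practice a classifier like this would be trained on labeled captures of benign and attacked robot traffic, and its residual false-negative rate is precisely the concern raised below.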
Some experts warn that there are drawbacks to using AI for such applications. Ted Miracco, CEO at Approov Mobile Security, wrote in emailed comments, “Using AI to address security concerns in military robots raises significant concerns and warrants critical examination. While the development of an algorithm to detect and intercept man-in-the-middle (MitM) attacks is a commendable effort, relying on AI for such critical tasks may not be the most responsible approach. A 99% success rate in preventing attacks may initially sound impressive, but when it comes to matters of national security and potential harm caused by compromised military robots, even a 1% failure rate is unacceptable if you are on the receiving end of the attack. MitM attacks can have severe consequences, including the potential for loss of life and significant damage and AI algorithms are probabilistic by nature, making them inherently fallible. There is always a risk of false positives or the much more disconcerting false negatives, where attacks go undetected. In the context of military operations, these errors can lead to disastrous outcomes.”
Miracco recommends pursuing deterministic solutions. “To ensure the security and integrity of military robots, deterministic solutions that provide 100% accuracy should be prioritized. While AI can play a role in augmenting security measures, it should be used as a supportive tool rather than the primary line of defense. Incorporating reliable, deterministic protocols and encryption techniques that leave no room for ambiguity or uncertainty should be the foundation of any security framework for military robots. It is imperative to prioritize deterministic solutions that eliminate any margin for error and take a comprehensive approach to security to ensure the safety and effectiveness of unmanned military systems.”
Plastic surgery clinics targeted by data thieves.
The US FBI has warned that plastic surgery practices and their patients are being targeted by cybercriminals. The attack proceeds in three phases: Data Harvesting, Data Enhancement (during which the attackers use a range of open-source information to augment the data they've stolen), and, finally, Extortion.
Erich Kron, Security Awareness Advocate at KnowBe4, commented on what he takes to be an unusually vicious sort of cybercrime. "This is particularly nasty, especially since plastic surgery tends to be a very personal type of procedure. Whether it's simply cosmetic for the sake of appearance, or more functional due to recovery from a significant illness or accident, the threat to expose the procedural information, especially photos, could cause serious embarrassment for the patients. Unfortunately, this sensitive and potentially embarrassing information is required for this kind of treatment," he said. The criminals, for whatever Robin-Hood postures they may cop, are indifferent to their victims. "The cyber criminals engaged in this sort of extortion activity know what they are doing and yet do not care at all about the impact this has on the individuals. That makes them a particularly disgusting group of individuals."
And the incident shows the importance of data protection, Kron concludes. "For clinics or facilities that perform this type of procedure, it's very important for them to protect this information for the sake of the patients and to protect themselves from significant legal repercussions that might occur when patient information is leaked, especially in a malicious fashion. These facilities need to ensure that their employees are trained to spot and report social engineering attacks, including email phishing, text message attacks, and even potentially phone calls that are designed to gain network access. These facilities should also ensure they have strong levels of DLP (Data Loss Prevention) controls in place to avoid becoming a victim themselves."
How law firms will be affected by Australia’s new cybersecurity plan.
Australia’s Cyber Security Strategy 2023-2030 is centered around six “cyber shields” intended to protect citizens and businesses from cyberthreats. The shields – which include improved digital product safety, increased citizen education, and a public-private threat-sharing and blocking system – have been designed to equip the country with a multi-layered defense system.
However, Neal Costello, a national account manager at Excite Cyber who focuses on securing law firms, says the strategy could have unintended negative consequences for lawyers who handle sensitive data. Costello told Lawyers Weekly, “While the government’s intentions are commendable from a national security perspective, one in two legal firms already lack confidence in their ability to detect and respond to threats. Increased regulation could make it even more difficult to run an effective internal security function; we expect to see that level of confidence fall even further.”
He goes on to say that law firms are already facing increased pressure to demonstrate compliance with industry standards, a goal made even more difficult by firms’ reliance on third-party service providers. He adds, “For those firms that were already struggling to stay ‘up to date’ with their compliance requirements, these additional measures are likely to be expensive and even cause confusion as internal IT teams struggle to catch up.”
The threat intelligence-sharing aspect of the strategy could be especially problematic for law firms, as it could put client data at risk of abuse. Costello states, “Security teams are going to be sorely pressed to resolve the conflict between the ideology of data sharing and the need to protect the environment from potential data breaches or misuse of their data. Many industry sectors are establishing their own intelligence-sharing systems to protect themselves within a targeted environment, whilst still sharing information with government.” To prepare for implementation of the strategy, he also recommends firms keep their boards informed of all cyber-related activities while seeking assistance from outside experts. “In many cases, the answer is to engage external expertise to allow internal staff to focus on essential support and assistance to staff, but in many cases, the missing aspect is board-level awareness and willingness to adapt to an increasingly hostile technological reality.”