At a glance.
- Can OpenAI meet EU regulation requirements?
- CFPB's breach was an inside job.
Can OpenAI meet EU regulation requirements?
As we’ve previously discussed, European regulators have been cracking down on OpenAI, the developer of the hugely popular AI-powered chatbot ChatGPT, in an effort to rein in the technology’s use of public data. Italian lawmakers banned the platform outright, and OpenAI has until April 30 – a matter of days – to comply with European data protection laws or face fines, forced deletion of data, or possibly a permanent ban in Italy. Experts, however, say OpenAI’s training approach, which relies on scraping the internet for content, will make it nearly impossible for the company to meet the EU’s requirements.
OpenAI has been steadily advancing its models, and the latest, GPT-4, likely required the largest set of training data yet. Western data protection authorities allege that much of that data includes people’s personal information, used without their consent. To comply with EU regulations, OpenAI would have to either ask for consent or prove that it has a “legitimate interest” in collecting the data in question. OpenAI would also be required to be more transparent with users about how ChatGPT uses their data, and to give individuals the ability to correct or delete any data they don’t want the platform to use. Making matters even more difficult for OpenAI, separating Italian users’ data from data belonging to other countries would be an extremely daunting task, one that could force the company to completely revamp the technology. While OpenAI argues that, in the US, data publicly available on the internet cannot be deemed “private,” the EU’s General Data Protection Regulation grants users considerably broader rights.
Lilian Edwards, an internet law professor at Newcastle University, explains to MIT Technology Review, “The US has a doctrine that when stuff is in public, it's no longer private, which is not at all how European law works.” She predicts that OpenAI’s case will land in the Court of Justice of the European Union, the EU’s highest court. In addition to Italy’s ban, French, German, Irish, and Canadian data regulators have also launched investigations into OpenAI’s data gathering processes.
CFPB's breach was an inside job.
The Wall Street Journal reports that a former employee of the US Consumer Financial Protection Bureau (CFPB) leaked the confidential data of 256,000 consumers and 45 financial institutions. A CFPB spokesperson has disclosed that the staffer forwarded the information to a personal email account. Although there is no evidence that the data went beyond the account in question, the transfer of the records constitutes a major data breach. The CFPB hasn’t publicly disclosed the names of the impacted firms or of the employee at fault, and lawmakers are urging the bureau to share more information about the incident. Representative Patrick McHenry, a Republican from North Carolina and chairman of the House Financial Services Committee, stated, “This breach raises concerns with how the CFPB safeguards consumers’ personally identifiable information.” In a letter sent to CFPB Director Rohit Chopra, Representative Bill Huizenga, a Republican from Michigan and chair of the Financial Services Committee’s investigations panel, has requested a briefing no later than April 25 on the “mitigation and remediation efforts, the scale of the breach, as well as efforts made to give the appropriate notifications.” Politico notes that the CFPB has referred the incident to its inspector general. According to spokesperson Sam Gilford, “The CFPB takes data privacy very seriously, and this unauthorized transfer of personal and confidential data is completely unacceptable. All CFPB employees are trained in their obligations under bureau regulations and Federal law to safeguard confidential or personal information.”
Darren James, Senior Product Manager with Specops Software, sees the case as one of clumsy handling of data. Whether the intent was malign or not, the effects were the same, and this kind of incident could be avoided with sound data handling practices. “Unfortunately, this is an example of clumsy handling of sensitive data," James said. "Even if there was no ill intent by the individual concerned, there are still huge risks to data privacy: whether the email was encrypted, who else has access to that email account, and whether there’s a strong password or MFA enabled on the personal email account. It’s unclear if the Consumer Financial Protection Bureau has done any subsequent threat intelligence analysis to see if this data has appeared anywhere else; we can only hope that this is the case. The CFPB has a lesson to learn here in responsible data handling. Any training done has failed, and more emphasis should be placed on cyber awareness training in the future to prevent poor security hygiene like this instance.”
Chris Hauk, consumer privacy champion at Pixel Privacy, notes that, while the person responsible may be gone, it's important to lock the access door behind the former employee. “It's a relief to see that apparently this breach has been contained and that the individual who misused the customer info is now gone. Hopefully the CFPB canceled all of that employee's access to their systems. Hopefully, the CFPB will require additional training for its employees about proper data handling. Well, except for the employee 'who no longer works at the CFPB'.”
Paul Bischoff, privacy advocate with Comparitech, duly notes the irony. “Although it is embarrassingly ironic for the CFPB to endanger consumers' information, the breach was contained and no one's information appears to be at risk. I imagine CFPB staff will be attending a lot of meetings soon about how to properly handle data and workplace policy.”
Joe Payne, CEO of Code42, thinks there should be an explanation of how the CFPB managed risk. “The recent news of a CFPB employee forwarding the personal information of more than a quarter-million consumers to a personal email address has resulted in a major data breach and calls CFPB’s consumer and internal risk safeguarding principles into question. This unauthorized transfer, intentional or not, illuminates the importance of securing collaborative cultures to stop data loss. Sadly, these incidents remain a pervasive problem, with Code42’s research revealing a year-over-year increase of 32% in the number of Insider Risk incidents. We see this happen time and time again: sensitive data loss due to human error. This should be a wake-up call for companies to invest in Insider Risk Management technologies and programs that drive down the risks associated with human behavior, and can block data exfiltration in an easy-to-administer way."
(Added, 10:15 PM ET, April 23rd, 2023. Kris Lahiri, Co-Founder and Chief Security Officer at Egnyte, draws an object lesson in the importance of safeguarding sensitive data. “The recent data breach at the Consumer Financial Protection Bureau reinforces the importance of safeguarding sensitive data, particularly Personally Identifiable Information (PII)," Lahiri writes. "While the agency states there is no evidence that the data was shared beyond the former employee’s personal email, this type of information is a treasure trove for threat actors. Therefore, it’s critical that organizations limit access to sensitive data on a need-to-know basis. They should also have systems and technologies in place to detect data misuse and anomalous user behavior, including potential insider threats.”)