At a glance.
- Another bad week for ChatGPT.
- T-Mobile asks for forgiveness.
Another bad week for ChatGPT.
Concerns about the security of ChatGPT have been at the forefront of the news in recent months, and now OpenAI, the company behind the AI-powered chatbot, has confirmed it suffered a data breach. Security Intelligence reports that the leak, which led to a temporary shutdown of the platform, stemmed from a vulnerability in an open-source library the chatbot relies on, which allowed users to view other users’ chat histories. In a statement about the incident, OpenAI said, “It was possible for some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number and credit card expiration date. Full credit card numbers were not exposed at any time.” The breach reportedly affected only about 1% of users and was swiftly remedied, but it no doubt reinforces critics’ views that the platform is a security risk.
As we discussed last month, employees at electronics giant Samsung were caught sharing sensitive company data with ChatGPT. In response, Bloomberg reports, Samsung Electronics is banning employee use of the chatbot on company-owned devices and its internal networks. The company announced the ban to staff on Monday, noting that data transmitted to ChatGPT and other artificial intelligence platforms, such as Google Bard and Bing, is difficult to retrieve or delete and could end up being viewed by other users. Samsung told staff, “Interest in generative AI platforms such as ChatGPT has been growing internally and externally. While this interest focuses on the usefulness and efficiency of these platforms, there are also growing concerns about security risks presented by generative AI.” To give employees an alternative, the company is developing its own internal AI tools to support software development in a secure environment.
Roy Akerman, Co-Founder & CEO, Rezonate, commented on the growing risk large-language models present. “The wide adoption of AI language models is becoming widely accepted as a means of accelerating delivery of code creation and analysis. Yet, data leakage is most often a by-product of that speed, efficiency, and quality. Developers worldwide are anxious to use these technologies, yet guidance from engineering management has yet to be put in place on the do’s and don’ts of AI usage to ensure data privacy is respected and maintained," he wrote. “The aspect of AI consuming all input as source material for others queries presents a black box of uncertainty as to exactly how and where a company’s data would end up and completely upends the tight data security at the heart of most all companies today.“
He doesn’t, however, see simple restriction as an approach that’s likely to control the risk. “Blanket restrictions are not a permanent solution and will only limit an organization’s visibility to this problem. Instead, increased control, with education of developers on the cause and effect of using these tools for code reviews, code optimization, debugging and syntax, will help harness the technology for the betterment of the organization.”
T-Mobile asks for forgiveness.
In March T-Mobile confirmed it had experienced its second data breach of the year, and now the telecom giant has issued an apology to customers impacted by the incident. The breach in question resulted in the theft of personal details, account information, and PINs belonging to over eight hundred individuals, which pales in comparison to the 37 million T-Mobile accounts exposed in January’s third-party breach. However, as SC Media notes, a second breach so soon isn’t great for the reputation of a company that has suffered nine breaches since 2018. In the mea culpa, T-Mobile stated, “While we have a number of safeguards in place to prevent unauthorized access such as this from happening, we recognize that we must continue to make improvements to stay ahead of bad actors.”