At a glance.
- ICO says Snap’s chatbot is too chatty with minors.
- China loosens data flow provisions.
- California passes Delete Act.
- Jawboning 101.
- Is Caesars bluffing about the details of its customer data breach?
- FTX trial witnesses implicate founder in customer fraud.
- Representative Santos charged with online crimes.
ICO says Snap’s chatbot is too chatty with minors.
Snap, parent company of popular instant messaging app Snapchat, is facing scrutiny from the UK Information Commissioner’s Office (ICO) for the app’s artificial intelligence chatbot My AI. On Friday the privacy watchdog announced a preliminary enforcement notice for Snap’s “potential failure to properly assess the privacy risks” presented by the generative AI chatbot. Billed as a virtual friend, My AI is pinned to the top of users’ feeds, available to provide answers to user questions or even send and receive snaps. The ICO has conducted a preliminary investigation, and while no breach has been discovered, the regulator says Snap may not have taken the necessary steps to make sure the product was compliant with the data protection rules laid out in the Children’s Design Code before the chatbot was launched in the UK last April.
Although Snap says My AI is equipped with safeguards that take user age into consideration and prevent the bot from giving offensive responses, there have been reports of the chatbot sharing inappropriate content. (For instance, My AI allegedly offered minors tips on drinking alcohol without getting caught and on losing their virginity.) In the enforcement notice, the regulator stated, “The ICO’s investigation provisionally found the risk assessment Snap conducted before it launched ‘My AI’ did not adequately assess the data protection risks posed by the generative AI technology, particularly to children.
The assessment of data protection risk is particularly important in this context which involves the use of innovative technology and the processing of personal data of 13 to 17 year old children.” Before the ICO makes a final decision, Snap will have the opportunity to respond to the concerns raised. A Snap spokesperson told TechCrunch, “We are closely reviewing the ICO’s provisional decision. Like the ICO we are committed to protecting the privacy of our users. In line with our standard approach to product development, My AI went through a robust legal and privacy review process before being made publicly available. We will continue to work constructively with the ICO to ensure they’re comfortable with our risk assessment procedures.”
China loosens data flow provisions.
The Cyberspace Administration of China (CAC) last month released a draft version of its Provisions on Regulating and Promoting Cross-Border Data Flows. Although the document is subject to change pending public comment, cyber/data/privacy insights explains that it indicates China is easing its restrictions on international data transfers. For instance, transfers of employees’ personal data that are necessary for human resources management will no longer require an official transfer mechanism. The CAC is also raising the threshold for what data requires an official security assessment, freeing many transfers from the need for CAC approval in the future. Other exemptions cover personal information that did not originate in China but is stored on Chinese servers, and organizations that transfer fewer than ten thousand individuals’ personal data within a year. Organizations transferring the personal information of more than ten thousand but fewer than one million individuals will not need a CAC security assessment, but they will have to either sign the standard contract released by the CAC or obtain a certification from a qualified certification agency. The new provisions also authorize China’s Free Trade Zones (FTZs) to publish their own “negative lists”; data transfers within a given FTZ that fall outside its list will be exempt from the official transfer mechanism rules. The draft provisions are open for public comment until October 15, and a final version could be released by November 30.
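To make the draft’s tiered structure above easier to follow, here is a minimal, hypothetical sketch (in Python) of how the volume thresholds and exemptions described in the draft might map onto required transfer mechanisms. The function name and the simplified categories are illustrative assumptions; the actual provisions contain additional conditions (important data, FTZ negative lists, and so on) not modeled here.

```python
def required_transfer_mechanism(individuals_per_year: int,
                                hr_management_transfer: bool = False,
                                data_originated_in_china: bool = True) -> str:
    """Hypothetical classifier for the draft CAC provisions' volume tiers.

    Simplified illustration only: the real draft also addresses important
    data, FTZ negative lists, and other exemptions not modeled here.
    """
    # Exemptions described in the draft: HR-management transfers and
    # personal information that did not originate in China but is merely
    # stored on Chinese servers.
    if hr_management_transfer or not data_originated_in_china:
        return "exempt"
    # Fewer than 10,000 individuals per year: no official mechanism required.
    if individuals_per_year < 10_000:
        return "exempt"
    # 10,000 to under 1,000,000 individuals: CAC standard contract or
    # certification from a qualified agency, but no security assessment.
    if individuals_per_year < 1_000_000:
        return "standard contract or certification"
    # 1,000,000 or more individuals: full CAC security assessment.
    return "CAC security assessment"


if __name__ == "__main__":
    print(required_transfer_mechanism(5_000))        # exempt
    print(required_transfer_mechanism(250_000))      # standard contract or certification
    print(required_transfer_mechanism(2_000_000))    # CAC security assessment
```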
California passes Delete Act.
Gavin Newsom, governor of the US state of California, has signed a new law that will make it easier for state residents to have their data deleted from data brokerage databases, the LA Times reports. While residents can currently ask individual brokers to remove their data, each broker requires a separate request, meaning consumers face the nearly impossible task of determining everywhere their data might be stored, and even then brokers have the right to deny a request.
Under Senate Bill 362, commonly called the Delete Act, the California Privacy Protection Agency is directed to create a new tool by January 2026 that will allow Californians to ask all data brokers to delete their personal information with a single request. Democratic Senator Josh Becker, the bill’s author, explains, “Data brokers possess thousands of data points on each and every one of us, and they currently sell reproductive healthcare, geolocation, and purchasing data to the highest bidder. The DELETE Act protects our most sensitive information.”
The bill further demonstrates how California is positioning itself as a leader in consumer privacy, but unsurprisingly, companies that rely on personal data say the bill could have a disastrous impact on the economy. Among the organizations opposed to the new law is the Consumer Data Industry Association, which represents consumer reporting agencies. Justin Hakes, the association’s vice president of communications and public affairs, stated, “SB 362 could have unintended consequences for all Californians: undermining consumer fraud protections, hurting small businesses’ ability to compete, and solidifying the big platforms’ data dominance.”
Maurice Uenuma, cybersecurity strategist and Blancco’s VP & GM, Americas, sees both the law’s upside (consumer protection) and its downside (the challenge of enforcing it). “From a privacy standpoint, this is a wonderful, consumer-friendly concept; it addresses many of the pain points for privacy-conscious citizens to limit their data exposure,” he wrote. “However, this will be very difficult to implement and enforce. One of the biggest challenges with data deletion requests is the ability to verify and prove that the data is truly gone. That, in turn, depends on the capacity to:
- “Find/collect all of the consumer’s data
- “Permanently and verifiably delete that data”
Uenuma adds, “The right mix of technology solutions will help with implementation. Ultimately, implementation will require substantial organizational, procedural, and technological changes. From the consumers’ standpoint, if they want to verify that their personal data has been permanently eliminated, then they should request a certified ‘proof of erasure’ at the end of the delete process.”
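As a rough illustration of the kind of artifact Uenuma describes, below is a minimal sketch of a hypothetical “proof of erasure” record a broker might return once a deletion request has been completed. The field names and the hash-based integrity check are assumptions made for illustration; they are not defined by the Delete Act or by any particular vendor.

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class ErasureCertificate:
    """Hypothetical record a data broker could issue after completing a deletion."""
    request_id: str
    broker_name: str
    data_categories_erased: list[str]
    completed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def fingerprint(self) -> str:
        # Placeholder integrity check: hash the certificate contents so the
        # consumer can detect later tampering. A production system would use
        # a real digital signature rather than a bare hash.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()


# Example: a broker documents that it honored a single deletion request.
cert = ErasureCertificate(
    request_id="REQ-000123",
    broker_name="Example Data Broker LLC",
    data_categories_erased=["geolocation", "purchase history"],
)
print(cert.fingerprint())
```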
Eduardo Azanza, CEO at Veridas, sees the Delete Act as the harbinger of a legislative trend. “California’s recent move to enhance user privacy is a trend we can expect to see spreading across many states in the U.S. There is undoubtedly a need for users to have a quick and easy way to scrub their data from the internet, and not feel like their data is trapped in a vault,” he wrote in emailed comments. “An individual’s online data is as integral as our physical characteristics. Today, someone having access to your data is just as sensitive as someone having access to your DNA, and people must have the means to swiftly erase what is available to others, with just one click. With the internet constantly expanding, we will see similar legislation set in motion for the array of data that is available online, such as biometric data, PII data, browser and social media data, financial data and more. Ensuring individuals can control their own accessible information is important to maintaining their digital autonomy.”
Jawboning 101.
Ahead of a workshop on the topic, the Knight Institute offers a first-hand account of “jawboning,” an insider term for government attempts to coerce tech companies into changing their content moderation policies. Two former members of Facebook’s public policy team (from its pre-Meta era) describe how US government officials used the informal persuasion tactic to pressure employees into changing company policies. “A government official can’t get what they want by passing a law or implementing a rule, so they lean on someone they know—and point and yell or give a serious stare—and threaten retribution by some other means,” the Facebook staffers write. They go on to explain how jawboning could be carried out both explicitly – like a federal official publicly asking for a particular policy during a meeting – and implicitly, perhaps by punishing the company if the official’s wishes weren’t honored. For instance, the account states, Facebook’s shift in stance when it came to upholding First Amendment rights after the 2016 election was due in part to government pressure. The Facebook staffers also recount how a senator attempted to push tech companies to ban the use of the Custom Audiences advertising tool as a way for political campaigns to circumvent the Honest Ads Act. The writers go on to offer several recommendations for how companies can avoid the pressures of jawboning: sharing data to help the government better understand the impact of the companies’ policies, establishing channels for official government input, and diversifying the internal decision-making process so that no one department determines outcomes. The writers conclude, “While we suggest that accounting for the impact of government power might result in prohibiting some communication that is persuasive but not coercive, we also recommend implementing transparency and oversight systems.”
Ave Caesar: videte et vocate.
See ’em and call; no bluffing. Caesars Entertainment, the American hotel and casino giant behind more than fifty properties, has confirmed that customer data was exposed as the result of a social engineering attack. As Cybersecurity Dive explains, the scammers did not target Caesars directly, but instead went after the company’s outsourced IT support vendor, gaining access to Caesars’ systems on August 18 and breaching customer data on August 23. Caesars detected the intrusion on September 7, and while the total number of compromised customers has not been disclosed, a filing with the Maine attorney general’s office indicates that over 40,000 Maine residents were impacted. The exposed individuals were members of Caesars’ customer loyalty program, and the exposed data include Social Security numbers and driver’s license numbers. A report from Bloomberg indicates that the Scattered Spider hacking gang, working in conjunction with the AlphV/BlackCat ransomware group, is behind the attack. Interestingly, although Caesars reportedly paid Scattered Spider millions not to release the stolen data, there is no mention of ransomware in the official filing. Furthermore, some observers are questioning the timeline of the attack, noting confusion about when exactly Caesars became aware of the intrusion. It’s worth noting that fellow entertainment company MGM Resorts was hit by a similar scam, and several customers have already filed class action lawsuits against both companies over the attacks. Lege et lacrimae.
FTX trial witnesses implicate founder in customer fraud.
A former executive tied to FTX, the fraud-ridden cryptocurrency exchange that collapsed last year, has testified that she was directed by company founder Sam Bankman-Fried to carry out criminal acts that defrauded the company’s customers out of their money. Caroline Ellison, who served as Bankman-Fried’s top deputy, appeared as a prosecution witness on Tuesday during Bankman-Fried’s criminal trial, stating, “He directed me to commit these crimes.” Ellison says Bankman-Fried was aware that his crypto hedge fund, Alameda Research, was in financial trouble and defrauded FTX customers out of billions of dollars in an attempt to keep Alameda afloat.
The Wall Street Journal adds that Ellison also testified that she and Bankman-Fried were romantically involved during the period when the illegal activity was taking place. She claims Bankman-Fried even promoted her to co-chief executive of Alameda in an attempt to distance himself from the company, though he remained in charge. “He was the person I officially reported to,” Ellison stated. “He owned the company. And he was the one who set my compensation and had the ability to fire me.” Ellison’s testimony comes as part of a plea deal that could result in leniency when she is sentenced for her crimes. Former FTX Chief Technology Officer Gary Wang also appeared as a prosecution witness, testifying that Bankman-Fried intentionally lied to customers in order to convince them FTX was stable. Bankman-Fried, who has pleaded not guilty, admits that FTX was poorly managed but claims he did not knowingly defraud customers of their funds and acted in good faith. The precedent-setting trial, which began last week, is expected to continue into next month.
Representative Santos charged with online crimes.
Representative George Santos (Republican representing New York’s 3rd District), already facing federal fraud charges, has been named in a superseding indictment that carries additional charges related to alleged online crimes, specifically “one count of conspiracy to commit offenses against the United States, two counts of wire fraud, two counts of making materially false statements to the Federal Election Commission (FEC), two counts of falsifying records submitted to obstruct the FEC, two counts of aggravated identity theft, and one count of access device fraud, in addition to the seven counts of wire fraud, three counts of money laundering, one count of theft of public funds, and two counts of making materially false statements.”