Welcome to the CAVEAT Weekly Newsletter, where we break down some of the major developments happening worldwide in cybersecurity, privacy, digital surveillance, and technology policy.
At 1,400 words, this briefing is about a 7-minute read.
At a glance.
- Newsom vetoes one AI bill while approving a second one.
- Google offers more changes to avoid an EU fine.
Newsom vetoes restrictive AI bill while approving another.
The news.
On Monday, California Governor Gavin Newsom vetoed AB 1064, which would have restricted children’s access to artificial intelligence (AI) chatbots. The bill would have barred AI developers from making their products available to children unless they could ensure that their models would not engage in discussions related to self-harm, suicide, eating disorders, and other similarly sensitive topics.
When vetoing the bill, Governor Newsom stated:
“While I strongly support the author’s goal of establishing necessary safeguards for the safe use of AI by minors, AB 1064 imposes such broad restrictions on the use of conversational AI tools that it may unintentionally lead to a total ban on the use of these products by minors.”
Alongside vetoing AB 1064, Governor Newsom signed SB 243 into law. This law requires developers of “companion chatbots” to establish protocols that prevent their models from generating content related to suicidal ideation, self-harm, or suicide, and that direct users to crisis services as needed.
When signing SB 243, Governor Newsom stated:
“Emerging technology like chatbots and social media can inspire, educate, and connect, but without real guardrails, technology can also exploit, mislead, and endanger our kids.”
The knowledge.
Beyond these two bills, California has recently been active in signing new AI and tech-related legislation into law. In late September, Governor Newsom signed SB 53, which imposes substantial requirements on developers of frontier AI models.
SB 53, also known as the Transparency in Frontier Artificial Intelligence Act, is one of the most significant AI laws enacted at the state level. Among other provisions, the bill:
- Requires developers to publish governance frameworks and transparency reports.
- Establishes greater protections for whistleblowers.
- Creates new mechanisms for reporting critical safety incidents.
- Lays the groundwork for a public computing cluster (CalCompute).
When signing the law, Governor Newsom stated that SB 53 would act as a blueprint for other states and would be critical to helping establish a broader federal framework.
SB 53 was drafted after Governor Newsom vetoed a stricter bill in 2024, SB 1047, known as the Safe and Secure Innovation for Frontier AI Models Act. While the two bills were similar, SB 1047 went significantly further, mandating the following:
- Imposing pre-training requirements on developers.
- Requiring annual audits by independent third-party assessors.
- Mandating full shutdown capabilities for covered models.
- Imposing a strict 72-hour reporting window for safety incidents.
- Setting significant penalties tied to the cost of the computing power used to train covered models.
Though the bill saw broad support, Governor Newsom ultimately vetoed it, arguing at the time that it was not grounded in an empirical analysis of the trajectory of AI systems and their capabilities.
The impact.
Governor Newsom’s recent decisions illustrate California’s increasingly proactive approach to developing AI regulation, one that aims to balance innovation with improved safety measures. Given California’s influence on national tech policy, these moves could significantly shape how other states and the federal government craft their own AI rules.
AI developers operating in California should take time to understand these major laws and their new requirements. By accounting for these regulations now, organizations can avoid the fines and penalties associated with noncompliance.
Google proposes more changes to avoid an EU fine.
The news.
Google has offered to make further changes to its search engine in an effort to avoid a significant European Union (EU) fine. Under these proposed changes, Google stated that it would “create the opportunity for each vertical search service (VSS) to show its own box on Search.” Furthermore, Google emphasized that “a VSS box will be populated with results from that VSS inventory.”
Google announced these proposed changes after its earlier proposal drew criticism. Critics argued that third-party VSSs should be placed on more equal footing with Search in terms of functionality and features.
After announcing the proposal, a Google spokesperson said:
“We remain concerned that any further changes to Search would prioritise the commercial interests of a small set of intermediaries over European businesses who want to sell directly to their customers.”
The knowledge.
These proposed changes follow Google’s previous proposal in July to modify Search to avoid an EU fine. That proposal, known as Option B, involved Google creating an additional box containing free links to suppliers, such as hotels, restaurants, and other services. When proposing Option B, Google stated that it “provides suppliers opportunities while not creating a box that can be characterised as a Google VSS.”
These changes follow increasing scrutiny of Google earlier this year, after the European Commission alleged that the company was unfairly favoring its own offerings, such as Google Shopping and similar services, over competitors in Search results. The Commission claimed that this conduct violated the EU’s flagship Digital Markets Act (DMA).
For context, the DMA is one of Europe’s most influential digital laws, which “establishes a set of clearly defined objective criteria to identify ‘gatekeepers.’” Under the DMA, gatekeepers must abide by a series of significant regulations or face major financial penalties. The designated gatekeepers include Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft.
The impact.
While it remains unclear whether the Commission will accept these proposed changes or whether they will reduce any potential fine, Google has signaled that it is unwilling to make further changes to Search.
Given Search’s significant role in digital markets, businesses should monitor these changes closely and understand how they could impact visibility and traffic. While the Commission could request more or different adjustments, understanding the proposed changes will help firms prepare accordingly.
Highlighting key conversations.
In this week’s Caveat Podcast, our team covered two developments. The first story involved OpenAI’s Sora model, which can create videos from text with increasingly realistic motion and sound. The conversation centered on how this and similar models are being used to impersonate people and the implications of those impersonations. Additionally, our team discussed a recent report published by Taiwan assessing Chinese cyber operations. The report detailed how attacks have increased by 17% year over year, now averaging 2.8 million attacks daily.
Like what you read, and curious about the conversation? Head over to the Caveat Podcast for the full scoop and additional compelling insights. Our Caveat Podcast is a weekly show where we discuss topics related to surveillance, digital privacy, cybersecurity law, and policy. Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com. Hope to hear from you.
Other noteworthy stories.
New York City is suing several major social media platforms.
What: New York City filed a lawsuit against major social media platforms, alleging that they fuel a youth mental health crisis.
Why: Last week, New York City filed its lawsuit against Facebook, Instagram, TikTok, and YouTube. The more than 300-page complaint reads:
“Youth are now addicted to Defendants’ platforms in droves, resulting in substantial interference with school district operations and imposing a large burden on cities, school districts, and public hospital systems that provide mental health services to youth.”
Further, the city argues that these platforms have created a “public nuisance” and accuses them of negligence.
The IMF says countries lack a regulatory and ethical foundation for AI.
What: IMF Managing Director Kristalina Georgieva warned that countries do not have the systems in place to handle AI’s rapid proliferation.
Why: On Monday, Georgieva raised her concerns about AI and emphasized the need to “ring the alarm bells.” Speaking at the annual IMF and World Bank meetings, she said she was “quite worried” about the gap in AI readiness between advanced and low-income economies, warning that the technology will only make it harder for developing countries to close that gap.
Alongside these concerns, Georgieva emphasized that the IMF is urging developing nations to focus first on the prerequisites for AI adoption and to use the IMF’s AI Preparedness Index to assess their readiness for the new technology. The index assesses a nation’s existing infrastructure, labor and skills, innovation, and regulation and ethics.
In her statements, Georgieva said that “where the world is falling shortest is on regulation and ethics.”
