Welcome to the CAVEAT Weekly Newsletter, where we break down some of the major developments and happenings occurring worldwide in cybersecurity, privacy, digital surveillance, and technology policy.
At 1,450 words, this briefing is about a 7-minute read.
At a glance.
- Microsoft resolves Teams investigation.
- New AI bill heads to Newsom’s desk.
EU’s Teams investigation resolved.
The news.
Last week, the European Union (EU) announced that it had resolved its investigation into Teams, Microsoft’s messaging and meeting app. The investigation was resolved after Microsoft submitted a series of proposed platform changes.
These changes include unbundling Teams from both Microsoft 365 and Office 365. Further, the firm will offer its business software products, such as Word, Excel, and Outlook, separately from Teams at a reduced price. Additionally, existing Microsoft customers will be allowed to switch to this new package.
When announcing the resolution, Teresa Ribera, the EU’s antitrust chief, stated:
“Today’s decision therefore opens up competition in the crucial market, and ensures that businesses can freely choose the communication and collaboration product that best suits their needs.”
The knowledge.
The EU’s investigation was originally launched in 2023 and focused on whether Microsoft had engaged in anticompetitive practices by bundling its communication product, Teams, with its business offerings. The investigation was launched after Slack and Alfaview filed complaints about Microsoft’s business practices.
When announcing this investigation, the Commission stated that it was specifically concerned that Microsoft was “abusing and defending” its position by restricting competition. More specifically, the Commission stated that:
“Microsoft may grant Teams a distribution advantage by not giving customers the choice on whether or not to include access to that product when they subscribe to their productivity suites and may have limited the interoperability between its productivity suites and competing offerings.”
In 2024, the Commission determined that Microsoft did restrict competition and that Teams had an “undue competitive advantage” from being tied to Microsoft’s other products. To avoid potential fines, Microsoft agreed to the unbundling requirements, though it remains unclear how much this will affect Teams’ dominance in the EU market.
The impact.
Microsoft’s concessions mark a potentially major shift in how its software is sold in Europe, and the effects could extend beyond Teams. By separating Teams from Microsoft’s productivity suite, the EU hopes to make these markets fairer and more competitive.
For Microsoft customers, this unbundling could introduce both opportunities and uncertainties. The changes could give organizations greater flexibility in choosing a collaboration service and could lower their costs. However, the changes could also create new administrative burdens around licensing, integration, and pricing structures.
While unbundling Teams will not drastically change the service’s market position in the short term, it could produce a new market landscape that is both more competitive and more innovative.
California state legislature passes AI bill.
The news.
Over the weekend, the California state legislature passed a new bill regulating artificial intelligence (AI). The bill focuses on improving the safety measures around the emerging technology; more specifically, it would mandate that AI developers disclose their safety testing regimes and certify that they are following them.
SB53, also known as CalCompute, was originally introduced in January 2025; however, it has since been amended following feedback from both other lawmakers and private industry leaders. The bill's author, State Senator Scott Wiener, described it as one that would:
“[Require] large AI labs to be transparent about their safety protocols, [create] whistleblower protections for [employees] at AI labs & [create] a public cloud to expand compute access.”
State Senator Wiener was the same lawmaker behind California’s major AI bill that was vetoed by Governor Newsom last year.
The knowledge.
SB1047, known as the Safe and Secure Innovation for Frontier AI Models Act, was one of the most notable AI bills of 2024. Even though it was ultimately vetoed, the bill proved significant.
For context, SB1047 aimed to significantly regulate large-scale AI models. More specifically, the bill required the following:
- Creating broad definitions of “covered models” under the bill.
- Requiring developers to implement controls to prevent “critical harms.”
- Imposing rigorous testing, assessment, reporting, and audit obligations on developers.
- Creating greater whistleblower protections for reporting noncompliance.
Though many supported the bill, there were notable criticisms that it was both overly broad in defining covered models and too restrictive. When vetoing it, Governor Newsom stated that it was “a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities.”
In contrast to SB1047, the latest bill, SB53, lowers regulatory requirements, especially for smaller developers. Companies developing major “frontier” AI models with less than $500 million in annual revenue need only disclose high-level details about their safety testing measures, while larger companies must provide more detailed reports.
These changes have softened some of last year’s pushback and earned the bill notable support, including from Anthropic. In its announcement, the major AI provider stated:
“Our support for this bill comes after careful consideration of the lessons learned from California’s previous attempt at AI regulation (SB1047).”
Some of the positive impacts Anthropic expects from the bill include:
- Developing and publishing safety frameworks for managing risks.
- Requiring public transparency reports.
- Requiring developers to report safety incidents within a set time frame.
- Creating whistleblower protections.
- Holding organizations accountable for violations of frameworks.
The impact.
While it is unclear whether Governor Newsom will sign SB53, the bill will likely garner as much attention as last year’s proposal. If signed, it would impose some of the most tangible AI requirements to date.
For Californian AI developers, the bill would impose new compliance requirements and costs, especially around safety management. AI companies of all sizes should take time to understand these potential requirements and which aspects would apply to their business to avoid unnecessary penalties.
For the wider AI industry, SB53 could set an important precedent. If other states follow California’s lead by imposing similar transparency and safety requirements, it could result in a more secure overall AI environment, but would also contribute to a patchwork regulatory environment for AI developers. While federal legislation could help alleviate the growing complexity of state AI laws, so far, no federal measures have been passed.
As with data privacy, California’s efforts have the potential to significantly influence how other states, and potentially the federal government, approach AI regulation.
Highlighting key conversations.
In this week’s Caveat Podcast, our team covers two major stories. The first revolves around California lawmakers passing a major new AI bill, which would establish a national precedent for the technology. With this AI safety bill through the state legislature, it now heads to Governor Newsom’s desk. The second story examines two deals between the US and the United Arab Emirates (UAE), involving both a major crypto investment and AI.
Like what you read, and curious about the conversation? Head over to the Caveat Podcast for the full scoop and additional compelling insights. Our Caveat Podcast is a weekly show where we discuss topics related to surveillance, digital privacy, cybersecurity law, and policy. Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com. Hope to hear from you.
Other noteworthy stories.
Chegg settles FTC lawsuit.
What: Chegg Inc. to pay $7.5 million to settle a US Federal Trade Commission (FTC) lawsuit.
Why: On Monday, Chegg Inc. agreed to settle the FTC’s claims that the company made it difficult for users to cancel their subscriptions.
In the FTC’s complaint, the agency alleged that Chegg hid its cancellation option behind multiple menus on its website. The FTC also cited internal emails showing that Chegg employees knew the cancellation process was both confusing and difficult.
A Chegg spokesperson stated that while the company disagrees with the agency’s allegations, it settled the suit to avoid a prolonged trial.
Vietnam investigates cyberattack.
What: Vietnam’s national credit database was targeted by a cyberattack.
Why: On Friday, hackers targeted Vietnam’s National Credit Information Center (CIC), a database managed by the State Bank of Vietnam. The CIC stores sensitive information, including general personal details, credit payment histories, risk analyses, and credit card data.
When announcing the attack, Vietnam’s cybersecurity agency stated that its “initial investigation indicated signs of unauthorised access aimed at stealing personal data, with the extent of the breach still being assessed.”
