9-minute read | 1,950 words
What to know this week
Meta and YouTube found negligent.
A jury found both social media companies liable for harms their platforms caused to a young user.
Judge blocks Pentagon reclassification of Anthropic.
A federal judge has blocked Anthropic from being labeled as a supply chain risk.
This week's full stories
Landmark social media case finds platforms negligent.
THE NEWS
Last week, a jury found both Meta and YouTube negligent for harms their platforms’ design features caused a young user. Under the verdict, Meta must pay $4.2 million and YouTube $1.8 million. The case stemmed from a lawsuit by a twenty-year-old woman who alleged that features like infinite scroll and algorithmic recommendations led to her developing anxiety and depression.
The verdict marks a major win for plaintiffs, as the lawsuit echoes hundreds of similar ones filed by teenagers, school districts, and state governments. More specifically, the ruling undermines a key defense these companies have used to avoid liability: Section 230 of the Communications Decency Act of 1996, which shields online platforms from being held liable for content posted by others.
Advocates for greater social media accountability have reacted positively to the ruling. One of the plaintiff’s lawyers, Joseph VanZandt, stated:
“This is the first time in history a jury has heard testimony by executives and seen internal documents that we believe prove these companies chose profits over children.”
Spokespeople for both Meta and Google pushed back on the verdict. A Meta spokeswoman stated:
“We respectfully disagree with the verdict and are evaluating our legal options.”
Jose Castaneda, a Google spokesman, stated:
“This case misunderstands YouTube, which is a responsibly built streaming platform, not a social media site.”
Notably, both TikTok and Snap were also named in the California lawsuit, but both settled with the plaintiff before the trial began.
THE KNOWLEDGE
This ruling marks a significant shift in how courts have traditionally handled these matters, a shift that gained further momentum when another verdict was issued in New Mexico. In that case, the state attorney general sued Meta for violating state laws designed to protect users from child predators; the jury ruled against Meta and ordered the company to pay $375 million in damages.
While Meta has already announced its intent to appeal, these two verdicts, alongside the series of similar lawsuits set to begin over the coming months, represent a potential turning point in how the courts approach these matters.
These legal challenges are also opening the door to a much larger debate over Section 230. The provision has no expiration date, but calls for Congress to reform it have been growing.
In recent years, Section 230 has become controversial, with opponents arguing that online platforms use it to avoid accountability for any harm users experience on their sites. Specifically, critics contend that platforms rely on Section 230 to avoid removing harmful content and to deploy recommendation algorithms that reinforce biases to consumers’ disadvantage.
Opponents have argued that algorithms should fall outside Section 230’s immunity, a change that would expose social media platforms to liability for the consequences their design features cause users. Such a change would dramatically reshape how these platforms are designed and how users interact with them.
THE IMPACT
For decades, social media platforms have come under increasing scrutiny over the harms their sites cause users, especially minors. The debate, however, has begun to shift away from what users are posting and toward how these platforms are designed. Features like infinite scroll, autoplay, and recommendation algorithms are increasingly viewed as manipulation tools that drive constant engagement with little regard for users’ psychological wellbeing.
This ruling is significant because it challenges and undermines the platforms’ Section 230 defense, which until now has proved highly durable in court. If this momentum continues in future cases and the rulings survive appeal, platforms could face a massive wave of lawsuits from impacted users.
Moreover, this case and its peers could mark a turning point in how the United States (US) handles social media platforms, treating their associated harms as a consumer safety issue rather than a free speech one. This shift could have significant downstream effects, forcing platforms into major product redesigns that disable some of these features or dramatically limit how their algorithms operate.
Judge blocks Anthropic’s reclassification.
THE NEWS
Last week, US District Judge Rita Lin blocked both Anthropic’s reclassification as a supply chain risk and President Trump’s order requiring all federal entities to cut contracts with the company. The ruling includes a one-week delay, giving the Trump administration time to seek relief from an appeals court.
In Judge Lin’s order, she wrote:
“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government.”
Anthropic responded to the ruling stating:
“We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government.”
While the Justice Department declined to comment, it indicated that it would appeal.
Separately, Anthropic has a related lawsuit over the same dispute ongoing in Washington, D.C.
THE KNOWLEDGE
These rulings stem from the massive fallout between Anthropic and the Trump administration. After agreeing to a $200 million contract with the federal government in summer 2025, the two worked together until February 2026, when relations soured.
According to the Pentagon, the fallout emerged after Anthropic refused to confirm that it would allow the government to use its AI systems in defense operations. Anthropic pushed back on these claims, stating that the fallout instead emerged after the federal government could not assure the company that its AI systems would not be used in mass domestic surveillance or fully autonomous weapon systems.
This disagreement eventually resulted in the Pentagon redesignating Anthropic as a supply chain risk. Additionally, President Trump signed an executive order instructing federal bodies to stop using any Anthropic AI systems.
These moves were not only unprecedented but also drew significant pushback from numerous AI business leaders. Google, Amazon, Apple, and Microsoft all publicly supported Anthropic’s subsequent lawsuits to overturn the redesignation, with each expressing concern about the retaliatory nature of the move.
THE IMPACT
While this ruling is a major win for Anthropic, the matter is not fully resolved. Beyond the Trump administration’s signaled intent to appeal, Anthropic’s similar suit in Washington, D.C. has yet to be resolved.
Given the unprecedented nature of the move, the Trump administration is unlikely to emerge fully in control of the situation. However, if the two courts issue conflicting rulings or either ruling is successfully appealed, proceedings could extend for some time.
This ruling restores Anthropic’s status and undermines the executive power the administration attempted to claim. It also revives dozens of Anthropic’s contracts, which had been left in limbo after the company was labeled a security risk.
This Week's Caveat Podcast: Reversing the risk label.
Dave Bittner and N2K’s lead analyst, Ethan Cook, sit down to discuss the recent judicial rulings in Anthropic’s lawsuits against the government. The two also examine the federal government’s request for public comment on a new federal insurance program designed to better address major cyber incidents.
OTHER NOTEWORTHY STORIES
WTO introduces world’s first digital trade rules.
What: World Trade Organization (WTO) members brought a digital trade rules agreement into force.
Why: On Saturday, WTO members sidestepped adoption hurdles to bring the world’s first baseline digital trade rules into force for consenting parties. At the summit, sixty-six members agreed to activate the deal in their nations and to pursue continued integration.
The United Kingdom’s Business and Trade Secretary Peter Kyle commented on the agreement, stating:
“As the first global digital trade deal, this will make trade cheaper, faster, and more secure for businesses around the world.”
The US did not sign the agreement, which is currently under review by the Trump administration.
The pact had previously been blocked twice by dissenting members. India has been one of the strongest opponents of the effort, arguing that trade agreements should be adopted multilaterally by consensus.
MAR 28, 2026 | Source: Reuters
Taiwan probes 11 Chinese firms.
What: Taiwan has begun investigating eleven Chinese firms for allegedly poaching talent.
Why: On Monday, Taiwan began investigating eleven Chinese firms it suspects of recruiting talent without approval. Taiwan’s Investigation Bureau claims the companies disguised their ownership by setting up shell firms or unauthorized offices to recruit Taiwanese semiconductor engineering talent.
Taiwanese law currently prohibits Chinese entities from investing in specific areas of the semiconductor supply chain, such as chip design, while other areas, such as packaging, require government oversight.
The companies under investigation include Huaqin Technology, Anker Innovations, Circuit Fabology Microelectronics Equipment, SG Micro, and Yangzhou Yangjie Electronic Technology Co Ltd.
MAR 30, 2026 | Source: Reuters
California’s new AI executive order.
What: California issued a new executive order regarding AI.
Why: On Monday, California Governor Gavin Newsom issued a state executive order centered on AI. The order mandates the following:
- Companies must explain their safety and privacy policies to state regulators and agencies.
- State officials who create AI videos must add watermarks so consumers can tell the difference.
- California may conduct its own risk assessments, separate from those conducted by the federal government, to determine whether a contractor poses a risk and should potentially be removed.
In a statement, Governor Newsom stated:
“California leads in AI, and we’re going to use every tool we have to ensure companies protect people’s rights, not exploit them or put them in harm’s way. While others in Washington are designing policy and creating contracts in the shadow of misuse, we’re focused on doing this the right way.”
MAR 30, 2026 | Source: The New York Times
Australia prepares to sue social media companies.
What: Australia is preparing to sue social media platforms for allowing minors to use their sites.
Why: On Tuesday, Australia announced that it was ready to file lawsuits against several of the largest social media platforms, alleging that these companies are ignoring the nation’s ban on minors accessing social media.
Communications Minister Anika Wells stated:
“We have spent the summer building that evidence base of all the stories that no doubt you have all heard … about how kids are getting around [the ban].”
Both Meta and Snap have pushed back on these allegations, emphasizing their commitment to complying with the ban; Meta added that the government’s own age-assurance technology found “natural error margins” in enforcing the age cutoff.
Under the current law, platforms are required to take steps to keep underage users off their services or face fines of up to $34 million per breach, which the government would have to pursue in court.
MAR 31, 2026 | Source: Reuters
