Welcome to the CAVEAT Weekly Newsletter, where we break down some of the major developments and happenings occurring worldwide in cybersecurity, privacy, digital surveillance, and technology policy.
At 1,500 words, this briefing is about a 7-minute read.
At a Glance.
- Texas mandates age verification enforcement on Google and Apple application stores.
- House reconciliation bill would ban state AI regulation efforts.
Texas enacts a law enforcing age verification requirements.
The news.
On Tuesday, Texas Governor Abbott signed a new law requiring both Google and Apple to verify the ages of their users in their application stores. Set to go into effect on January 1, 2026, Senate Bill 2420 mandates parental consent for users under eighteen to download applications or make in-application purchases. The bill has already sparked a backlash from industry stakeholders.
Kathleen Farley, the vice president of litigation for the Chamber of Progress, stated:
“A big path for challenge is that it burdens adult speech in attempting to regulate children’s speech[, and] that there are arguments that this is a content-based regulation singling out digital communication.”
Kareem Ghanem, the senior director of government affairs and public policy at Google, commented:
“We see a role for legislation here. It’s just got to be done in the right way, and it’s got to hold the feet of Zuckerberg and the social media companies to the fire because it’s the harm to kids and teens on those sites that’s really inspired people to take a closer look here and see how we can all do better.”
Apple, in a public statement, warned:
“If enacted, app marketplaces will be required to collect and keep sensitive personal identifying information for every Texan who wants to download an app, even if it’s an app that simply provides weather updates or sports scores.”
Alongside this latest bill, another Texas state bill aims to restrict social media applications for users under the age of eighteen. This bill has been passed by the State House and awaits a vote in the State Senate.
The knowledge.
With this law, Texas joins Utah as the second state to pass age verification requirements for application stores. Utah’s SB 142, otherwise known as the App Store Accountability Act, is similar to Texas’s law: it requires major app store providers to verify ages and secure parental consent for users under eighteen before allowing application purchases or downloads. Utah’s law also mandates that age verification data be encrypted and used only for the purposes outlined in the law. While these are the only two such measures to pass at the state level, other states and federal lawmakers have attempted similar legislation, reflecting the growing bipartisan scrutiny of the tech industry’s impact on minors.
One high-profile example of this scrutiny is Florida’s SB3, better known as the Online Protections for Minors Act. After being signed into law, the bill went into effect at the beginning of 2025. It bans social media accounts for children under fourteen and requires parental permission for those fourteen and older to create an account. When the law was signed, Florida lawmakers cited concerns about social media's harmful effects, such as addictive features, exposure to harmful content, and data privacy issues.
The impact.
While Texas’s law is likely to face legal challenges, it adds to the growing push to regulate Big Tech. These state efforts will continue to build pressure for a more unified federal response, especially as more states join in.
For parents, these laws create new rights and responsibilities for managing children’s online activity. Parents should understand how these rules affect their children and what powers they are granted, which vary from state to state.
For businesses, compliance with these state laws could become both complex and costly as new legislation is introduced and passed. Companies should monitor these legislative developments to ensure their policies stay current and their data practices remain compliant.
New federal bill would ban all state AI regulations.
The news.
Last week, the House of Representatives narrowly passed its new budget reconciliation bill in a 215-214 vote, drawing significant attention. Among its numerous cuts and provisions, one standout proposal calls for a ten-year moratorium on state laws regulating artificial intelligence (AI) models, systems, or automated decision tools. The provision would cover existing legislation as well as future legislation.
Proponents of the measure argue that the moratorium addresses the growing patchwork of state laws that has emerged over recent years, creating increasing confusion and uncertainty for both industry and regulators. Representative Jay Obernolte commented on the value of the provision, stating:
“Right now, there are over a thousand bills on the topic of AI regulation pending in state legislatures across the country. Imagine how difficult it would be for a federal agency that operates in all 50 states to have to navigate this labyrinth of regulation when we potentially have 50 different states going 50 different directions on the topic of AI regulation.”
While the provision would preempt most state AI legislation, it includes exemptions for measures that aim to remove barriers to AI development or that promote AI adoption.
The knowledge.
Although the new reconciliation bill must still pass the Senate, the potential preemption of all state AI legislation would be significant. To date, state legislatures have led the way in creating the United States’ AI regulatory framework. Notable examples include:
- California’s AB 2013: Requires AI developers to disclose details on the data used to train their AI systems, including high-level summaries of the datasets used.
- Tennessee’s HB 2091 (The Ensuring Likeness, Voice, and Image Security Act): Prohibits the use of AI systems to mimic another’s image, voice, or likeness without explicit permission.
- Colorado’s SB 24-205 (The AI Act): Regulates high-risk AI systems and imposes substantial obligations on AI developers and deployers to protect consumers from discriminatory practices.
These laws reflect the growing concern over AI’s societal impact and the bipartisan desire for greater oversight, especially in the absence of federal action.
While some have criticized this patchwork system, it is important to note that the framework formed in the absence of any meaningful federal AI policy. Federal efforts have been attempted, mainly through executive orders and agency actions, but Congress has been largely unable to pass comprehensive AI legislation. If this measure were passed by the Senate and subsequently signed into law by President Trump, significant questions would emerge about the future of AI legislation.
The impact.
As the Senate debates and potentially amends the reconciliation bill, this provision will likely be a key topic of discussion given its significance. At present, it is unclear where many senators stand on the issue.
In the meantime, businesses, regulators, and consumers should closely monitor the bill’s progress and track this specific measure, as it could impact many existing compliance practices and rights. The outcome may fundamentally reshape how AI is governed across the country.
Highlighting key conversations.
In this week’s Caveat Podcast, our team covers a recent court case in which a judge rejected a claim that a chatbot has free speech rights. The lawsuit emerged when Character.AI’s developers were sued after their chatbot allegedly encouraged a teenager to commit suicide. With this order, the judge has allowed the wrongful death lawsuit to proceed, and the case has the potential to be a significant test of the constitutional rights associated with AI.
Like what you read, and curious about the conversation? Head over to the Caveat Podcast for the full scoop and additional compelling insights. Our Caveat Podcast is a weekly show where we discuss topics related to surveillance, digital privacy, cybersecurity law, and policy. Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com. Hope to hear from you.
Other noteworthy stories.
EU and US take down malware network.
What: European, American, and Canadian agencies took down a malware network and issued arrest warrants.
Why: On Friday, agencies took down over 300 servers worldwide and issued international arrest warrants against twenty suspects. Additionally, over 650 domains were neutralized and 3.5 million euros in cryptocurrency were seized. With this operation, authorities targeted “initial access malware,” which is used for the initial infection of a device or system.
The operation originally began in May 2024 and has seized a total of 21.2 million euros to date.
DOJ accuses Russian national of ransomware attacks.
What: The Department of Justice (DOJ) has unsealed charges against a Russian national for developing and deploying malware.
Why: Last Thursday, the DOJ charged Rustam Gallamov with leading a group of malicious actors in deploying Qakbot, malware that infects devices, delivers additional malware, and conscripts the infected devices into a botnet.
Alongside unsealing these charges, DOJ officials filed a complaint seeking the forfeiture of over $24 million in cryptocurrency and other funds.