Policy Deep Dive: Preemption
Jul 10, 2025


In this special policy series, the Caveat team is taking a deep dive into key topic areas that are likely to generate notable conversations and political actions throughout the next administration. This limited monthly series focuses on a different policy topic each installment, offering our analysis of how the issue is currently being addressed, how existing policies may change, and sharing thought-provoking insights along the way.


For this month’s conversation, we’re focusing on Preemption. In the 2025 reconciliation bill, Republican lawmakers attempted to use preemption to impose a ten-year moratorium on all state AI laws.


To listen to the full conversation, head over to the Caveat Podcast for additional compelling insights. 

Key insights.

  1. Federal AI moratorium. Despite states passing numerous AI laws, federal lawmakers attempted to undo that progress with a ten-year moratorium.
  2. The debate over preemption. While this moratorium gained notable attention, it is not the first time Congress has attempted to use preemption.
  3. The future of AI laws. Even though this moratorium did not pass Congress, federal AI legislation is gaining momentum.

The battle over AI regulation.

While states have led the way in creating AI policy, the federal government has sought to undo these efforts.

As artificial intelligence (AI) has continued to proliferate across nearly every business sector, concerns surrounding its risks have not only persisted but continued to grow. Despite worries about AI hallucinations, privacy exposures, copyright infringement, and high-risk systems, the federal government has been unable to pass any comprehensive bill that effectively addresses them. Thus, in the absence of federal oversight, state legislatures have filled the void by passing dozens of laws.

While numerous laws have been passed, some of the highlights include:

  • Colorado’s SB 24-205: A 2024 law that defined key terms and outlined developer and deployer responsibilities, including mandating impact assessments, disclosing foreseeable risks, and increasing transparency requirements.
  • California’s AB 2013: A 2024 law that requires AI developers who deploy their AI services or systems within California to post details on how data was used to train their AI system or service, including high-level summaries of the datasets used.
  • California’s SB 942: A 2024 law that requires covered providers to make a free AI detection tool available to users and to offer users the option to include a manifest disclosure in image, video, or audio content that identifies it as AI-generated. This disclosure must be clear, conspicuous, appropriate, and understandable to a reasonable person. The law also makes covered providers that violate it liable for a civil penalty for each violation.
  • Tennessee’s HB 2091 (The Ensuring Likeness, Voice, and Image Security Act): Also known as the ELVIS Act, this 2024 law targets AI deepfakes and prohibits individuals from using AI systems to mimic another’s image, voice, or likeness without explicit permission.
  • Utah’s SB 149 (The AI Policy Act): A 2024 law that imposes disclosure requirements on “regulated occupations,” with special emphasis on the healthcare industry, requiring these professions to prominently disclose the use of AI before rendering services. Other professions must disclose their AI usage only when asked by a consumer. The law also allows AI developers and deployers to benefit from regulatory mitigation, including waived civil fines, during pilot testing phases.

This small sample reflects a broader effort by state legislatures to address AI and its various risks. However, despite the progress that states have made, federal lawmakers recently and controversially attempted to undo their work. 

Though the 2025 reconciliation bill recently passed both chambers of Congress, one of its most notable provisions did not survive: a ten-year moratorium that would have nullified nearly all existing state AI laws and prevented new ones from being passed for the moratorium's lifespan.

For greater context, the moratorium was introduced in the House's version of the reconciliation bill, which passed the House in a 215-214 vote in May 2025. As the provision drew increasing attention, Representative Russ Fulcher argued that it would provide greater clarity, stating that “a patchwork of various state laws is not good for innovation, for business or consumers, and that is what we’re trying to avoid.” Alongside lawmaker support, technology companies backed the provision, arguing that contradictory regulations from state to state would create confusion and stifle innovation.

After passing the House, the provision moved to the Senate, where it continued to attract attention. Senator Ted Cruz rewrote the moratorium to ensure the Senate Parliamentarian would approve its inclusion under reconciliation rules. Although the House had passed the provision and the Parliamentarian approved it, the Senate ultimately voted 99-1 to strip the moratorium from the bill. Two notable conversations have emerged from this failed effort: the first concerns the legality of the moratorium, and the second concerns what comes next for federal AI policy.

Thinking Ahead: 

If the moratorium had been passed, what would have been the impacts on AI developers, deployers, and users?

Preemption.

While this measure failed, the legal doctrine behind it, preemption, has been invoked before with varying degrees of success.

Despite this measure failing to pass Congress, this is not the first time the federal government has invoked preemption. For context, preemption is a federal power, grounded in the Constitution’s Supremacy Clause, that allows the federal government to override state laws. The clause declares that federal law is the “supreme Law of the Land.” However, despite the Constitution empowering the federal government to supersede state laws, many preemption disputes have been heavily debated and have reached the Supreme Court with mixed outcomes.

Since 2000, dozens of preemption cases have reached the Supreme Court. Some of the more notable ones include:

  • Arizona v. United States (2012): An immigration case where the Court ruled that the federal government has broad and undoubted power over immigration and alien status and that the Supremacy Clause gave Congress power to preempt Arizona’s state laws.
  • Virginia Uranium, Inc. v. Warren (2019): A ruling that found the Atomic Energy Act did not preempt Virginia’s prohibition on uranium mining, allowing the state to continue its mining ban.
  • Wyeth v. Levine (2009): A case where the Court held that a Vermont failure-to-warn claim was not preempted by the Food and Drug Administration’s approval of the drug’s label, allowing the state tort suit to proceed.

While these are only a small selection of preemption cases, their conflicting outcomes illustrate a legal gray zone, one that certainly would have been tested in court had the AI moratorium passed.

Thinking Ahead: 

How might a second attempt at a similar measure be changed to garner more support in Congress?

Federal AI policies.

While no comprehensive federal AI law has been passed, the moratorium fight will likely spur greater momentum.

Though the moratorium may have failed for now, the federal approach to AI regulation has already begun to shift under the Trump administration. Whereas the Biden administration managed AI through executive orders, imposing greater restrictions and directing agencies to take a proactive role in governance, the Trump administration has taken a laissez-faire, industry-driven approach.

Since taking office, President Trump has committed to this deregulatory approach, rescinding former President Biden’s Executive Order (EO) 14110 and replacing it with EO 14179. While such policy changes are common with new administrations, this instance was more than a small pivot: it marked a fundamental shift in federal AI policy.

EO 14179 established a new AI policy intended to “sustain and enhance America’s global AI dominance to promote human flourishing, economic competitiveness, and national security.” Alongside this new policy, the order directed all federal agencies to identify and review any AI-related actions taken under the previous administration to determine whether they hampered this new agenda. Lastly, the order mandated that agencies begin developing new policies to execute this agenda.

President Trump’s focus on removing barriers to AI innovation largely aligns with his broader deregulation philosophy. However, in the absence of robust federal policies, states have continued to take the lead, creating a growing patchwork of AI laws that developers, deployers, and users must navigate.

Given this deregulatory stance, it is not surprising that federal lawmakers have turned to preemption as a strategy to override state-level efforts. While the measure failed, the moratorium reflected a broader federal push to unburden AI developers and deployers from the increasingly fragmented and restrictive regulatory landscape. With these dynamics in place, the coming months are likely to bring renewed legislative activity.

Thinking Ahead: 

How would a deregulated AI space change how AI is developed and deployed, and what risks would become more impactful?

A renewed push on AI.

Once Congress returns to session, a renewed push to address AI will likely follow.

The failed AI moratorium was more than a one-time rejected policy. Rather, it was a preview of a growing debate over the future of AI policy in the United States and the roles federal and state lawmakers will play. With states leading the current effort to manage AI, the federal government will eventually have to decide where it fits in: whether it supports, overrides, or continues to ignore these state policies.

However, given this most recent effort, coupled with the Trump administration's stance on deregulating AI, this debate is far from over. Whether the next effort comes through renewed bipartisan consensus or is implemented unilaterally remains unclear. What is clear is that the next phase of AI governance and regulation will be highly contested and will surely involve court rulings, especially if preemption is used as a legal basis.

For now, AI developers and deployers will need to continue navigating the growing patchwork of state regulations in the absence of federal leadership, and AI users should understand what a deregulated AI ecosystem would look like and its potential impacts on safety, accountability, and privacy.

Thinking Ahead: 

What other measures could Congress consider to address AI?