
Policy Deep Dive: The future of AI policy.
In this special policy series, the Caveat team is taking a deep dive into key topic areas that are likely to generate notable conversations and political action throughout the next administration. Each installment of this limited monthly series focuses on a different policy topic, offering our analysis of how the issue is currently being addressed, how existing policies may change, and what insights are worth considering.
For this month's conversation, we’re focusing on artificial intelligence (AI) policy. We review the substantial changes of the past several years and assess which key topic areas still need to be addressed.
To listen to the full conversation, head over to the Caveat Podcast for additional compelling insights.
Key insights.
- States Lead the Way. Given the lack of federal AI legislation, states have led US efforts to address the emerging technology and its various risks.
- A Diverse Set of AI Laws. Because states have paved the way on AI legislation, different states have addressed and prioritized different risks.
- A Lack of Federal Support. While dozens of states have taken steps to implement AI policies, the federal government has been unable to make any significant progress on this issue.
- Existing Policy Gaps. While states have taken action to address AI and better manage its risks, numerous areas still need stronger management.
Managing AI.
Given AI's sudden explosion across society, states and the federal government have struggled to manage the emerging technology effectively.
While the concept of AI models has existed for well over a decade, the past five years have seen a massive and rapid proliferation of this technology that shows no signs of slowing down. Although this emerging technology holds the potential to revolutionize many business sectors, these models pose a variety of immense challenges that are still not fully understood and need to be addressed.
However, despite this clear need, there has been little federal legislation to address and manage these technologies properly. The former Biden administration largely managed the technology through executive actions rather than comprehensive legislation. Due to this lack of federal leadership, the responsibility of managing AI has fallen to the states, which have introduced and, in some cases, passed various bills aimed at addressing many of the risks associated with AI.
Whether these bills have centered on managing deepfakes, mandating greater transparency, or creating AI task forces, these state-led initiatives have laid the initial foundations for broader and more comprehensive AI legislation to take root. However, despite this strong foundation, the various state bills have produced a patchwork of legislation that makes developing, deploying, and using AI models more challenging than it would otherwise need to be.
Thinking Ahead:
How can states streamline legislation efforts if the federal government remains unable to pass impactful legislation?
The Landscape of State AI Policy.
Built piecemeal from various state bills, AI legislation remains inconsistent and patchworked heading into 2025.
With AI’s continuing rapid development, it should come as no surprise that lawmakers have attempted to address the emerging technology and ensure its ethical development, deployment, and use. However, before diving into what still needs to be addressed, it is critical to understand what has been regulated, how states and the federal government have contributed to these efforts, and what initiatives have failed.
As it stands, the majority of the United States’ (US) AI legislation has emerged from state legislatures. More specifically, in 2024, nearly 700 AI-related bills were introduced across forty-five states, with 113 of them enacted. By comparison, in 2023, state lawmakers introduced a total of only 191 bills. The five states that did not introduce any AI-related bills were Nevada, Montana, North Dakota, Texas, and Arkansas. The enacted laws focused on a variety of AI-related issues, including deepfakes, elections, model-level regulations, transparency, and government AI usage.
While it is not practical to cover every piece of state AI legislation, several standout laws are critical to discuss, with perhaps the most important being Colorado’s AI law, better known as the Colorado AI Act. SB 24-205 was passed in May 2024 and is widely considered the first comprehensive AI law in the US. While the bill covers a variety of AI-related topics, it mainly addresses high-risk AI systems and imposes substantial obligations on both AI developers and deployers to protect consumers from discriminatory practices.
Although the law does not take effect until February 2026, it was instrumental in defining key terms such as “high-risk AI systems” and “consequential decisions.” Furthermore, the Act defined key responsibilities for both developers and deployers, such as mandating impact assessments, disclosing foreseeable risks, increasing transparency requirements, and other similar measures.
Aside from this key law, some other notable state AI-related laws include the following:
- California’s AB 2013: A 2024 law that requires AI developers that deploy their AI services or systems within California to post details on how data was used to train the system or service, including high-level summaries of the datasets used.
- California’s SB 942: A 2024 law that requires covered providers to make a free AI detection tool available to users and to offer users the option to include a manifest disclosure in image, video, or audio content that identifies it as AI-generated. This disclosure must be clear, conspicuous, appropriate, and understandable to a reasonable person. Lastly, the law makes covered providers that violate its provisions liable for a civil penalty for each violation.
- The Colorado Privacy Act (CPA): A 2023 bill that gave Colorado consumers the right to opt out of the processing of their data. Additionally, the CPA also required that data controllers conduct a data protection impact assessment to see if the processing of personal data creates a heightened risk of harm to a consumer.
- Tennessee’s HB 2091 (The Ensuring Likeness, Voice, and Image Security Act): Also known as the ELVIS Act, this 2024 law targets AI deepfakes and prohibits individuals from using AI systems to mimic another person’s image, voice, or likeness without explicit permission.
- Utah’s SB 149 (The AI Policy Act): A 2024 law that imposed restrictions on “regulated occupations,” with special emphasis on the healthcare industry, requiring these professions to prominently disclose the use of AI before rendering services. Other professions are required to disclose their AI usage, but only when asked by a consumer. Additionally, the law allows AI developers and deployers to benefit from regulatory mitigation, including waived civil fines, during pilot testing phases.
- The Virginia Consumer Data Protection Act (VCDPA): A 2023 bill that set out rules related to profiling efforts and automated decision-making. More specifically, the VCDPA gives consumers the right to opt out of “profiling in furtherance of decisions that produce legal or similarly significant effects” regarding the consumer. Additionally, controllers are also required to perform data protection impact assessments for high-risk profiling activities. Texas passed a similar law to the VCDPA known as the Texas Data Privacy and Security Act.
Aside from these enacted bills, there is one other notable piece of legislation that, despite failing to become law, was critical. California’s SB 1047, otherwise known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was highly controversial and ultimately died after Governor Newsom vetoed it. Originally introduced by State Senator Scott Wiener, SB 1047 was a landmark piece of legislation that aimed to regulate foundation AI models and impose various requirements on the companies creating these technologies. Some of the measures the bill attempted to establish included the following:
- Creating definitions for terms such as “covered models” and “covered model derivatives.”
- Requiring companies to implement both technical and organizational controls to prevent models from causing “critical harms” including building in shutdown capabilities, cybersecurity protections, and safety protocols.
- Mandating rigorous testing, assessment, reporting, and audit requirements for AI developers.
- Mandating that computing providers implement policies and procedures for customers that use computing resources sufficient to train a covered model.
- Prohibiting developers from preventing employees from reporting noncompliance, whether internally or externally.
In addition to these requirements, the bill would have mandated the creation of a new division within the state’s Government Operations Agency. This division would have been tasked with implementing guidance, receiving certifications, and promoting model oversight. Supporters of the bill emphasized that it would help incentivize upstream risk management without being overly burdensome on the open model communities.
However, despite passing the state legislature, the bill remained highly controversial, with a large group of major technology companies rallying against it. Many of these critics argued that California would hamper innovation and competitiveness by regulating development efforts rather than AI usage. Additionally, OpenAI criticized the effort, arguing that the national security concerns the bill sought to address should be the responsibility of the federal government rather than the states.
Governor Newsom vetoed the bill, emphasizing that it failed to “take into account whether an AI system is deployed in high-risk environments, involves critical decision-making, or the use of sensitive data,” among other concerns. Despite his veto and criticism, Governor Newsom did emphasize that AI still needs to be regulated and that “proactive guardrails should be implemented, and severe consequences for bad actors must be clear and enforceable.”
Thinking Ahead:
What major policy areas are states likely to address in 2025?
Federal AI Oversight.
Since federal lawmakers were unable to pass comprehensive AI legislation, short-lived executive actions have been used to address the gaps.
While many of these state bills are effective, the inherent difficulty with state laws is that, at best, they create a patchwork of rules. This dynamic could make it difficult for AI developers and deployers to comply effectively and harder for consumers to enjoy consistent protections nationwide. Normally, federal legislation would help address these concerns, but to date, no comprehensive bills have passed at the federal level, and the only actions taken have largely been ephemeral executive efforts issued by former President Biden.
The Biden administration made several efforts to better regulate the AI environment. First, in 2023, former President Biden signed Executive Order (EO) 14110, which not only defined the administration’s AI policy goals but also ordered federal agencies to take action to execute them. Some of these goals included the following:
- Focusing on ensuring responsible AI development.
- Encouraging greater transparency and public input.
- Promoting greater global cooperation with international partners.
- Establishing guidelines for AI safety and security and assessing impacts on various business sectors and their workers.
Building on this order, former President Biden later signed EO 14141, with which the administration aimed to continue advancing US leadership in AI infrastructure and innovation. More specifically, the order aimed to increase federal support for addressing AI data centers’ rapidly increasing energy needs. To accomplish this, the EO did the following:
- Allowing the Departments of Defense and Energy to lease federal sites to host AI data centers.
- Prioritizing that AI data centers be powered by clean energy sources such as solar, wind, and geothermal.
- Mandating that companies utilizing leased sites purchase an “appropriate” amount of American-made semiconductor chips when developing AI models.
In addition to these orders, former President Biden also oversaw the creation of the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework. This framework, released in January 2023, was designed to ensure that AI systems are developed and used in ways that minimize risks and promote public safety. Within it, NIST outlined how organizations can better manage their AI models while minimizing their inherent risks, stressing several critical principles:
- Governance: This principle centers around ensuring that both leadership and the organization have defined structures in place to oversee their AI systems, better manage risks, and establish clear accountability.
- Comprehensive Risk Management: A principle that ensures that risks are being effectively managed at all levels of the organization from an individual level to the planetary scale.
- Emphasizing Documentation: The framework routinely highlights the need for constant and consistent documentation ranging from detailing various roles and responsibilities to covering an AI model’s limitations.
- AI Trustworthiness: A principle that prioritizes ensuring that developers are transparent throughout the development process and that their reports are valid and reliable. Additionally, this principle emphasizes how trustworthiness characteristics relate to the other aspects of the risk management framework.
Thinking Ahead:
If federal lawmakers remain unable to pass significant AI legislation, what steps can still be taken to help manage this technology?
Trump’s Role in AI.
As President Trump begins to enact his AI policies, his goals will center around domestic innovation and deregulation.
While the previous Biden administration was able to implement some AI policies, its efforts fell short of producing comprehensive federal AI legislation. Due to this lack of success, the Biden administration implemented its policies largely through executive actions, which, while impactful, can be easily undone by President Trump.
President Trump has already committed to his deregulation agenda by rescinding former President Biden’s EO 14110, claiming that the order was hampering innovation and reducing US global competitiveness. Shortly afterward, President Trump signed a new AI order, EO 14179, which aims to establish a policy that would “sustain and enhance America’s global AI dominance to promote human flourishing, economic competitiveness, and national security.” Additionally, the order requires the immediate review and identification of any actions taken by federal agencies under EO 14110 that are inconsistent with this new policy. Lastly, the order mandates that an action plan detailing how the new policy will be implemented be created and submitted to the President within 180 days. While President Trump’s AI agenda will largely center on deregulation, that does not mean he will have no role in managing AI.
Given President Trump’s past emphasis on AI’s role in national security, it is highly likely that his second administration will have a similar agenda. For context, in 2019, President Trump signed EO 13859, which prioritized addressing AI’s role in both economic and national security. This EO anticipated some of the Biden administration’s later policies, which prioritized supply chain security, improving domestic semiconductor chip manufacturing capabilities, and effectively implementing AI in national security efforts.
Given this past support for AI in national security, it is likely that he will aim to continue many of these policy efforts provided they do not overtly conflict with his deregulation agenda. As his administration continues to design and implement AI policies, his role will likely pivot to a more laissez-faire approach centered on how the federal government can harness AI while allowing states to regulate it.
Thinking Ahead:
Throughout 2025, what major policies could the Trump administration consider that utilize AI?
Existing AI Policy Gaps.
Despite state efforts, there are still significant policy gaps that need to be addressed to effectively manage AI.
Given the federal government’s lack of emphasis on implementing comprehensive AI legislation, states will likely continue to lead AI regulation efforts. As state legislatures prepare to implement new AI policies, lawmakers will likely focus on addressing issues like AI transparency, cybersecurity risks, data privacy, model regulations, potential AI bias and discrimination, and potentially harmful impacts on labor markets.
As states work to address these various gaps, they will likely copy some of the legislation that has already been passed. For example, Colorado’s AI Act, which imposed regulations and obligations on AI developers and deployers, will likely serve as a template for other states’ bills centered on improving accountability. Furthermore, while California failed to pass SB 1047, which would have dramatically overhauled how individual AI models are regulated, California’s legislators have already signaled their intent to reintroduce a similar bill. In the absence of federal legislation, states will likely replicate established AI legislation when working to regulate the technology. While this process will lead to an increasingly patchworked legislative landscape, similarities will be found nationwide, especially among states with similar views on regulation.
As lawmakers begin to craft new AI legislation, it will be critical for AI developers, deployers, and users to understand these various bills and the regulations they mandate. Given AI’s ever-growing importance in society, it is inevitable that lawmakers at all levels of government will continue to craft, introduce, and pass legislation aimed at mitigating the technology’s inherent risks. Understanding the intricacies of these laws will enable AI stakeholders to successfully harness the technology while minimizing its downsides.
Thinking Ahead:
How can states shape future AI policy in the US?