At a Glance.
- FCC considering requiring AI disclosure in campaign ads.
- A comprehensive look at where general-purpose AI regulation currently stands.
FCC Chair considering mandating AI disclosure in campaign ads.
The News.
On Wednesday, Federal Communications Commission (FCC) Chairwoman Jessica Rosenworcel announced a new proposal that would require the disclosure of content created by artificial intelligence (AI) in political ads aired on radio or TV. If adopted by the commission, the measure would mark the first time the federal government has regulated the use of the emerging technology in politics, and it would apply to the upcoming elections in November 2024. In announcing the proposal, Rosenworcel stated that “as [AI] tools become more accessible, the commission wants to make sure consumers are fully informed when the technology is used.”
While the proposal would not ban the use of AI in political advertisements, it would regulate its usage and require transparency for both candidate and issue ads. Notably, the disclosure requirement would cover only radio and TV ads; it would not apply to online ads or ads on streaming services.
The Knowledge.
While this latest proposal still needs to be voted upon, it follows another move from earlier this year, when the FCC banned robocalls from using AI-generated voices. That ban came after a political consultant used AI to impersonate President Biden in the New Hampshire primary in an attempt to convince voters not to participate in that election. With this new proposal, the FCC is looking to expand upon those initial actions before the general elections take place in the fall.
The proposal also follows regulatory efforts in Congress, where Senators Amy Klobuchar and Lisa Murkowski introduced a bill that would require similar disclosures for the use of AI in political advertisements. This bipartisan legislation, known as the AI Transparency in Elections Act of 2024, was placed on the Senate Legislative Calendar under General Orders last Wednesday and is awaiting further action.
These latest efforts come after concerns have been routinely raised about how AI can be misused in political advertisements. Policy strategists, such as Nicole Schniedman, have highlighted how generative AI could make spreading disinformation and suppressing voters more efficient and effective. Given these concerns, the AI robocalls in New Hampshire, and another instance in which an entirely AI-generated ad depicted a dystopian future under a second Biden administration, lawmakers are taking the issue seriously as they attempt to manage the technology before it can be significantly abused.
The Impact.
While this proposal must still be voted upon before any regulatory impacts take effect, both this move and the recent action on the AI Transparency in Elections Act demonstrate that federal lawmakers recognize these concerns. If either effort passes, political campaign advertisers would need to reshape how they use AI when creating political ads.
Until either of these measures is passed, however, US citizens should remain vigilant about any advertisement they see, as AI could have been used to spread misinformation or mislead voters. Voters should continue to perform due diligence in researching issues and candidates to develop informed opinions.
A Study on Regulating General-Purpose AI.
The Publication.
In a comprehensive research paper published by the Brookings Institution, Benjamin Cedric Larsen and Sabrina Küspert take a deep dive into the current state of general-purpose AI regulation and policy. Given the rapid pace at which AI is developing and changing, policymakers have had difficulty keeping up well enough to regulate the emerging technology properly, as new developments occur monthly, if not weekly. Larsen and Küspert highlight this problem and use the publication to outline these developments, examine how the European Union (EU) and the United States (US) are working to manage the technology, and compare the two governments' approaches.
Beginning with the EU, Larsen and Küspert highlight how the regional governing body has positioned itself as the frontrunner in regulating AI, especially after the European Parliament approved the EU AI Act earlier this year. While the act still awaits formal adoption and would come into effect a year after ratification, it has been unanimously supported by representatives from all EU member states. Larsen and Küspert remark that each iteration of the AI Act has reflected a deeper understanding of the potential benefits and risks of AI, as well as the need for more rules governing general-purpose AI models. They note the European Commission's warning that powerful AI models, if not properly managed, could cause significant accidents, spread misinformation, or be used in criminal activities. Beyond passing the AI Act, the EU also aims to establish a dedicated European AI Office tasked with enforcing the regulations on general-purpose AI models, fostering greater international cooperation, and stimulating the development and use of trustworthy AI. Through each of these actions, Larsen and Küspert emphasize, the EU has been taking numerous proactive steps toward effectively managing and regulating AI, with the goals of creating trustworthy AI and a sound legal framework.
Regarding the US, Larsen and Küspert discuss how, until recently, the US had taken a notably hands-off, or laissez-faire, approach to managing AI, with no centralized framework. Instead, it relied on a fragmented set of regulations administered by different federal agencies working independently of one another. However, when President Biden signed Executive Order 14110 last October, the administration implemented its most comprehensive approach to AI governance yet, setting new standards for AI safety and placing a strong emphasis on privacy, civil liberties, workers' rights, responsible usage, and innovation. Larsen and Küspert remark that this executive order has unified the previously disjointed efforts into a more singular and comprehensive approach to AI governance, centered on “dual-use foundation models” that could pose risks to the economy, the public, and national security. Despite this more unified approach, however, they caution that the order could easily be revoked or altered by the next administration if Congress passes no comprehensive laws to complement it.
The Impact.
After discussing both the EU's and the US's approaches to managing general-purpose AI, Larsen and Küspert offer a comparative analysis of the two strategies. They remark that the US executive order functions more as an outline for federal agencies to follow and carries no true regulatory force, whereas the EU's AI Act is far more direct, applying regulations to general-purpose AI developers operating within the EU. While both center on managing the development and use of AI and emphasize greater security and transparency, the two differ greatly in scope.
Larsen and Küspert conclude by discussing the importance of greater international cooperation on AI governance. They highlight that while the EU and the US have pursued different strategies for managing and governing AI, the two recently came together with several other nations at a G7 conference in September 2023 to create an AI Code of Conduct. While non-binding, the document sets a strong foundation for promoting greater international cooperation. Larsen and Küspert argue that such cooperation is critical to building a broader global consensus on AI governance and to amplifying more diverse voices, especially from underrepresented regions. Throughout the publication, they emphasize that while the US and EU have taken critical first steps toward better managing general-purpose AI models, more effort is needed at the international level to address AI's opportunities and challenges while ensuring that governance frameworks are both effective and equitable on a global scale.
Other Noteworthy Stories.
Colorado Governor signs new AI regulation bill.
What: Colorado Governor Jared Polis has signed a new AI bill into law that will impose new regulations on AI developers.
Why: On Monday, Governor Jared Polis signed into law a new bill that will require AI developers to avoid discrimination in high-risk systems. Specifically, the law will require developers to “use reasonable care to avoid algorithmic discrimination.” It will also require developers to disclose information about their systems to regulators as well as to the public.
While the governor acknowledged having reservations about the bill, he encouraged lawmakers and stakeholders to work together on improvements before its rules go into effect in 2026.
US SEC updates customer data hacking rules for Wall Street.
What: The US Securities and Exchange Commission (SEC) has approved new rules aimed at ensuring that investment companies and others detect and respond to customer data theft.
Why: Late last week, the SEC unanimously approved amendments to rules originally introduced in 2000. The amended rules are aimed at ensuring that Wall Street investment companies do a better job of detecting and responding to customer data theft. On the rules' adoption, SEC Chair Gary Gensler stated that “over the last twenty-four years, the nature, scale, and impact of data breaches has transformed substantially.”
Companies covered by the updated rules will need to come into compliance within eighteen months to two years or face regulatory consequences.