Welcome to the CAVEAT Weekly Newsletter, where we break down some of the major developments and happenings occurring worldwide when discussing cybersecurity, privacy, digital surveillance, and technology policy.
At 1,950 words, this briefing is about a 10-minute read.
At a Glance.
- California Governor Newsom signs three new AI bills into law.
- FEC forgoes new AI rulemaking processes.
Governor Newsom signs several new AI bills into law.
The News.
Last Friday, Governor Newsom signed three new bills into law, each addressing artificial intelligence (AI). The three laws, SB 926, SB 942, and SB 981, focus on how AI can be misused to create sexually explicit deep fakes. These latest efforts mark the state’s most recent attempt to address how AI can be abused to generate fake images, audio clips, and videos of a person. Aside from making it illegal to use AI in this manner, the laws also place responsibility on both AI developers and social media companies to prevent the technology from being used and shared in this capacity.
After signing these three bills, Governor Newsom stated that “we’re in an era where digital tools like AI have immense capabilities, but they can also be abused against other people.” Governor Newsom continued, stating that “nobody should be threatened by someone on the internet who could deep fake them, especially in sexually explicit ways.”
The Knowledge.
These three bills are each centered around stopping AI from being abused to make sexually explicit deep fakes. Each bill addresses this issue in the following ways:
- SB 926 makes it illegal to create and distribute sexually explicit images of a real person that appear real and could cause that person “serious emotional distress.”
- SB 942 requires AI-generated content to come with a disclosure so that users can more easily identify this type of content.
- SB 981 mandates that social media platforms create ways for users to report sexually explicit deep fakes of themselves and requires these companies to temporarily block the content while an investigation takes place.
Aside from addressing sexually explicit deep fakes, Governor Newsom also signed two other AI-focused bills into law last week that were aimed at protecting various performers from others using their names, images, and likenesses without authorization. More specifically, the first bill centered around protecting actors and performers from binding contracts that allow AI to use their image in place of in-person work unless the performer has representation. The second bill protects an artist’s digital likeness even after their death.
However, despite Governor Newsom signing each of these bills into law, critics are still concerned about the lack of comprehensive AI legislation. While the California state legislature did pass the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) last month, Governor Newsom has yet to sign or veto the highly contentious bill. For greater context, SB 1047 is one of the few AI bills that has made significant progress toward being passed and has drawn both criticism and support from major political figures. While the bill’s supporters, such as Elon Musk, have stated that legislation is needed to protect the public and minimize risks, critics, like Nancy Pelosi, have expressed concerns that the proposed bill is more harmful than helpful, citing experts who stated the bill “would have significant unintended consequences that would stifle innovation and will harm the [United States (US)] AI ecosystem.”
Despite this bill being highly debated, the Biden administration has continued to advocate for both state and federal lawmakers to put forward a bill that properly addresses AI. Just last week, the administration announced its intention to host a major international AI summit in San Francisco in November. With this announcement, US Commerce Secretary Gina Raimondo stated that the conference will be the “first get-down-to-work meeting.” While it is unclear what will happen with SB 1047, the administration hopes that this conference will spur greater AI momentum in the future.
The Impact.
With Governor Newsom signing all of these AI bills into law, the state has begun to make progress in addressing some of the major concerns surrounding how AI can be used, as well as better defining who bears the burden of responsibility for the technology. However, despite this progress, the US still needs comprehensive AI legislation, whether through SB 1047 or another bill at the federal level.
Both AI developers and organizations that plan to implement the technology should be aware that the government is placing greater responsibility on organizations to protect users and secure the technology. Organizations should take steps to secure their AI systems and provide ways for users to report inappropriate AI content, so that they remain compliant with legislation similar to the bills already passed in California. The general public should continue to remain vigilant and skeptical of the content they see online and always verify sources to ensure they are consuming real content rather than AI-created videos, posts, or audio files.
The FEC has decided to forgo any new AI rulemaking.
The News.
Last week, the Federal Election Commission (FEC) stated that it will not propose any new rules this year for how AI may be used in political advertising. With this announcement, the commission stated that it lacks the authority to limit or stop the use of this technology in elections. More specifically, the FEC voted five-to-one to approve a compromise that instead issued an interpretive rule clarifying that AI use falls under existing regulations that bar fraudulent misrepresentation in advertisements.
After the vote, Democratic Commissioner Dara Lindenbaum stated that “four of us have been working together on this to make sure that we could give a clear answer to the public and to the requester in this petition when they asked if artificial intelligence…used in fraudulent misrepresentation per our statute applies.”
This vote has already drawn criticism from the requester, Public Citizen, whose co-president, Robert Weissman, stated that “the anemic FEC seems to have forgotten its purpose and mission, or perhaps its spine.” The one dissenting vote, Sean Cooksey, stated, “I just worry that [the ruling] will be misinterpreted, misunderstood, have a potential chilling effect on people who might think it’s prohibiting something new, when in fact it’s not.” Cooksey continued, stating that “there’s nothing that is going to be made illegal by this interpretive rule…that isn’t already illegal.”
The Knowledge.
With this vote, the FEC has elected not to propose any new rules for how AI should be handled in political advertisements before the election. While this decision may seem minor at first glance, it means that the upcoming election will rely on voluntary measures to manage how AI is used, rather than clear rules defining what is and is not allowed.
For months now, concerns have continued to arise regarding how AI will be used ahead of the upcoming elections to sway voter opinions. Although some instances of fraudulent misrepresentation have already been addressed, as seen when AI was used to mimic President Biden’s voice earlier this year, there remains a significant gray zone regarding how AI can be used in political advertisements and whether advertisers must disclose that an advertisement was created with AI. Unfortunately, it is unlikely any greater clarification will come over the next few weeks, since Congress has been unable to build enough consensus around any AI-focused bill to come close to passing one before its next recess at the end of the month. Federal agencies have faced similar challenges. While the Federal Communications Commission is considering a rule that would require disclosing the use of AI in political advertisements, it is unclear whether that rule will be adopted before the election: the public comment period closed last week, and the September meeting agenda does not list the rule as an explicit discussion topic.
The Impact.
Since it is unlikely that Congress or a government agency will rule on how AI should be responsibly used ahead of the upcoming elections, people across the US should remain vigilant when consuming any election-related news. While it is unlikely that every advertisement will use AI, or use it maliciously or deceptively, people should still question advertisement claims and research topics themselves to make informed decisions at the local, state, and federal levels come November.
Additionally, political campaigners should understand that while AI may currently be used in advertisements without disclosure, there are still limitations on what can be said or depicted. Advertisement makers should be aware that their advertisements still fall under the purview of fraudulent misrepresentation rules and should vet any AI-generated content to ensure it complies with relevant laws and regulations before distributing it to the general public.
Highlighting Key Conversations.
In this week’s Caveat Podcast, our team sits down with Jen Roberts and Nitansha Bansal, both Assistant Directors of the Atlantic Council’s Cyber Statecraft Initiative, to discuss their recent report: “Mythical Beasts and Where to Find Them: Mapping the Global Spyware Market and its Threats to National Security and Human Rights.” During this conversation, we unpack the various links between the 435 entities in the global spyware market and the impacts this technology has on our everyday lives.
Like what you read and curious about the conversation? Head over to the Caveat Podcast for the full scoop and additional compelling insights. Our Caveat Podcast is a weekly show where we discuss topics related to surveillance, digital privacy, cybersecurity law, and policy. Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com. Hope to hear from you.
Other Noteworthy Stories.
Bipartisan lawmakers unveil new measures that would require social media mental health warnings.
What: Senators Katie Britt and John Fetterman have introduced a new bill that would require social media platforms to have mental health warning labels.
Why: On Tuesday, Senators Britt and Fetterman introduced their bipartisan legislation, called the Stop the Scroll Act, which would inform users of the mental health risks associated with social media applications and provide access to mental health resources. If passed and signed into law, the bill would require the Surgeon General and Federal Trade Commission to create pop-ups that would warn users of the potential mental health risks every time a user logs on.
With this introduction, Senator Britt stated: “with the Stop the Scroll Act, Senator Fetterman and I are following through on the Surgeon General’s call to create a warning label for social media platforms, but we’re going further by requiring the warning label to also point users to mental health resources.”
Google Cloud files complaint with European Commission.
What: Google Cloud filed a complaint against Microsoft's licensing practices with the European Commission.
Why: On Wednesday, Google Cloud filed a complaint with the European Commission alleging that Microsoft has locked customers into Teams through licensing agreements and plans to do the same with its Azure cloud solution. Google alleges that these licensing terms prevent European customers from moving their existing Microsoft workloads to competitors without facing a 400% price markup from Microsoft.
Microsoft responded to this complaint by stating that “having failed to persuade European companies, we expect Google similarly will fail to persuade the European Commission.” Microsoft additionally stated that it had resolved similar concerns raised by other European cloud providers earlier this year, after those companies withdrew their complaints in exchange for Microsoft agreeing to make its software solutions integrate more seamlessly with other platforms.