Welcome to the CAVEAT Weekly Newsletter, where we break down some of the major developments and happenings occurring worldwide when discussing cybersecurity, privacy, digital surveillance, and technology policy.
At 1,750 words, this briefing is about an 8-minute read.
At a Glance.
- AI bill aims to give journalists and artists more control over their content.
- FTC requests information about Amazon and Adept dealings.
Bipartisan AI bill would give journalists and artists control over content.
The News.
The new Content Origin Protection and Integrity from Edited and Deepfaked Media Act, or COPIED Act, is gaining traction in Congress as it looks to better protect content creators from AI misuse. If passed, this bipartisan bill would require platforms that develop or share artificial intelligence (AI) systems to allow creators to attach content provenance information to their work within two years. The bill defines "content provenance" as machine-readable information that documents the origin and history of a piece of digital content. The bill would also prohibit companies from using work carrying these labels to train their AI systems or generate content without the creator's explicit consent. In addition, the bill would require the National Institute of Standards and Technology (NIST) to create new guidelines and standards for content provenance information, watermarking, and synthetic content detection.
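To make the idea concrete, a machine-readable provenance record might look something like the sketch below. This is purely illustrative: the COPIED Act leaves the actual format to forthcoming NIST standards, so every field name here is invented for the example.

```python
import hashlib
import json

def make_provenance_record(content: bytes, creator: str, history: list[str]) -> dict:
    """Build a hypothetical provenance record for a piece of digital content.

    Illustrative only -- the real format would be set by NIST standards;
    these field names are assumptions, not part of the bill.
    """
    return {
        # Hash ties the record to this exact version of the work
        "content_hash": hashlib.sha256(content).hexdigest(),
        # Who originated the content
        "creator": creator,
        # Documented history of edits and transformations
        "history": history,
        # Default: no consent for use in AI training
        "ai_training_consent": False,
    }

record = make_provenance_record(
    b"article text", "Example Newsroom", ["original draft", "copyedit"]
)
print(json.dumps(record, indent=2))
```

The key point the bill turns on is the last field: a platform could check a flag like this before ingesting the work into a training set, and a creator could point to it when seeking damages.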
This bill was introduced in the Senate by Senators Maria Cantwell, Marsha Blackburn, and Martin Heinrich. Introducing the bill, Senator Cantwell stated that these measures would provide "much-needed transparency around AI-generated content," adding that "the COPIED Act will also put creators, including local journalists, artists and musicians, back in control of their content with a provenance and watermark process."
The Knowledge.
While this proposed legislation has only just been introduced in the Senate, it comes as pressure continues to mount over how companies use AI. Content creators have long expressed concerns about the unregulated practice of using their content to train AI systems or generate similar content. SAG-AFTRA, a major actors' union, addressed these concerns, stating that "the capacity of AI to produce stunningly accurate digital representations of performers poses a real and present threat to the economic and reputational well-being…of our members." Beyond SAG-AFTRA, other prominent creator groups, including the Recording Academy, the News/Media Alliance, and the National Newspaper Association, have also expressed support for the bill.
However, despite its bipartisan and industry-wide support, this latest bill follows several other ineffective attempts to address these concerns. In October 2023, a similar bill, the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, was introduced with comparable bipartisan support. Despite that support, the NO FAKES Act has yet to be voted on; its last update came on April 30th, 2024, when the Senate Judiciary Committee discussed it during a hearing.
The Impact.
While this latest bill has only just been introduced in the Senate, it is representative of the growing pressure on Congress to regulate how AI developers use others' content. If the bill passed, AI developers would need to change how they train their AI models and generate AI content. Rather than using others' content freely, developers would have to obtain explicit consent, which would likely require financial compensation. The bill would also allow creators to sue developers who disregard these content provenance labels.
AI developers should track this legislation and be prepared to adjust how they use others' content if the bill passes. Even if this particular bill is never signed into law, developers should expect pressure on Congress to keep mounting, and legislation similar to the COPIED Act will likely pass in the coming months or years. For content creators, the bill would represent a large step forward in protection from AI misuse, and creators should be prepared to attach content provenance information to their work if it passes.
FTC seeking details on Amazon AI startup deal with Adept.
The News.
On Tuesday, reports emerged that the Federal Trade Commission (FTC) has asked Amazon to provide more details about its business dealings with Adept, a deal in which Amazon hired several top executives and researchers from the AI startup. The inquiry comes after Adept's chief executive, David Luan, and several others announced last month that they were leaving the company to join Amazon, which had also been partnering with the startup to license some of its technologies. Amazon's ties to Adept mark the technology giant's second notable move into the AI industry: since last September, Amazon has invested $4 billion in the AI startup Anthropic and now owns a minority stake in the company.
At this time, this is only an informal inquiry, meaning it will not necessarily result in an official investigation or any enforcement action unless the FTC deems one necessary.
The Knowledge.
With this move, the government is continuing its effort to scrutinize the business dealings between large technology companies and AI developers. Throughout 2024, US antitrust enforcers have opened numerous inquiries and launched several investigations into these relationships. In January, for example, the FTC announced that it had issued five "compulsory orders" to Amazon, Google, Microsoft, OpenAI, and Anthropic requiring them to provide more information about their business agreements and decision-making processes. The pattern continued in June, when both the FTC and the Department of Justice launched antitrust investigations into Microsoft, OpenAI, and Nvidia. While these investigations are still ongoing, Microsoft gave up its observer seat on OpenAI's board one month later, a move many believe was made in response to the growing regulatory scrutiny.
While it is unclear whether this most recent inquiry will lead to a formal investigation, it should not be a surprise if the FTC launches one upon finding concerning behavior. These agency actions reflect the Biden administration's push to scrutinize large technology companies' business practices more closely, as concerns continue to grow that monopolistic behavior and the race for AI dominance could undermine fair competition, introduce safety risks, and limit innovation.
The Impact.
At this time, it is unclear whether the government will open an investigation into Amazon's dealings with Adept. Either way, the government's interest in such dealings is likely to keep growing as AI becomes more entrenched in everyday society. Companies involved in developing, funding, or distributing AI products should plan accordingly to ensure they comply with any regulations that may be implemented.
For consumers who use AI products, these investigations and rulings will likely have little noticeable impact. In theory, however, they should result in a more innovative and secure AI ecosystem.
Other Noteworthy Stories.
OpenAI whistleblowers ask SEC to investigate alleged restrictive NDAs.
What: Whistleblowers filed a complaint regarding OpenAI’s use of restrictive non-disclosure agreements (NDAs).
Why: Over the weekend, several OpenAI whistleblowers filed a complaint with the US Securities and Exchange Commission (SEC) requesting that the agency investigate OpenAI's NDAs. In their letter, the whistleblowers stated that "given the well-documented potential risks posed by irresponsible deployment of AI, we urge the Commissioners to immediately approve an investigation into OpenAI's prior NDAs, and to review current efforts…to ensure full compliance with SEC rules." According to the whistleblowers, OpenAI allegedly issued highly restrictive employment agreements, severance agreements, and NDAs to its employees, which could have penalized employees who raised concerns with federal authorities. The whistleblowers also requested that the SEC require OpenAI to produce every contract containing an NDA for federal inspection.
AT&T says data breach exposed nearly all of its US customers.
What: AT&T announced that the company suffered a massive breach involving millions of US customer accounts.
Why: On Friday, AT&T announced that a breach in April allowed call and text records from 2022 to be illegally downloaded for over 109 million US customer accounts. According to AT&T, the compromised data included records of calls and texts for both its cellular and landline customers between May and October 2022. However, AT&T stated that the data contained neither the content of the calls and texts nor any personal information.
At this time, federal authorities have not identified any suspects, but AT&T announced that it is working with the Justice Department and the Federal Bureau of Investigation to investigate the incident and manage the response.
Federal court halts reimposed net neutrality rules.
What: The US Court of Appeals for the Sixth Circuit has placed a temporary stay on the recently reimposed net neutrality rules.
Why: On Monday, the US Court of Appeals for the Sixth Circuit stayed the recently restored net neutrality rules until August 5th. The stay was granted after several broadband providers filed a motion requesting one. For context, net neutrality rules bar broadband providers from throttling or blocking internet traffic to websites that do not pay additional fees. Originally introduced under the Obama administration, the rules were rescinded under the Trump administration and were reinstated only this past April by the Biden administration. Support for the rules splits along partisan lines, with Democrats supporting the measures and Republicans opposing them.
FCC considering a rule that would require AI disclosures on robocalls.
What: Jessica Rosenworcel, the Federal Communications Commission’s (FCC) chair, proposed a rule that, if passed, would require callers to disclose the use of AI on robocalls.
Why: On Tuesday, Jessica Rosenworcel proposed a new rule that would require callers to disclose their use of AI-generated calls. The rule would also seek to better define AI-generated calls so the FCC could institute stronger guardrails around the use of the technology. Announcing the proposal, Rosenworcel stated that "bad actors are already using AI technology in robocalls to mislead consumers and misinform the public" and that the rules would "empower consumers to avoid this junk and make informed decisions." The full commission will consider the proposal at its August meeting.