Welcome to the CAVEAT Weekly Newsletter, where we break down some of the major developments and happenings occurring worldwide when discussing cybersecurity, privacy, digital surveillance, and technology policy.
At 1,900 words, this briefing is about a 9-minute read.
At a Glance.
- The Senate officially passes both KOSA and COPPA 2.0.
- The Biden administration announces new AI actions and voluntary commitments.
Senate passes both KOSA and COPPA 2.0 with overwhelming bipartisan support.
The News.
On Tuesday, the United States (US) Senate passed, with overwhelming support, a legislative package aimed at improving children’s safety online. The package contained both the Kids Online Safety Act (KOSA) and the updated version of the Children and Teens’ Online Privacy Protection Act (COPPA 2.0), and it passed by a 91-3 vote. The package now moves to the House of Representatives for another vote. After the vote, Senator Chuck Schumer stated, "these bills have real bipartisan momentum, so we should seize the opportunity to send them to the president’s desk.” The House has not yet set a date to vote on the package and is currently on recess until September.
The three senators who voted against the package were Ron Wyden, Rand Paul, and Mike Lee. Explaining his “no” vote, Senator Wyden stated that he “[fears] this bill could be used to sue services that offer privacy-enhancing technologies like encryption or anonymity features that are essential to young people’s ability to communicate securely and privately without being spied on by predators online.”
The Knowledge.
With the Senate passing this legislative package, both KOSA and COPPA 2.0 have made significant legislative progress. For context, KOSA is a bill introduced by Senators Richard Blumenthal and Marsha Blackburn and was created to require stronger safeguards and tools to protect minors online. The bill was developed after a series of hearings on the dangers social media poses to minors drew notable public interest. If passed, KOSA would require covered platforms to:
- Maintain a duty of care to prevent and mitigate enumerated harms.
- Enable default safeguards for known minors that limit others' ability to communicate with them, control personalization systems, and limit features designed to extend the use of the platform.
- Provide “easy-to-use” settings for parents to better manage a minor’s use of a platform.
- Provide specified notices and obtain verifiable parental consent for children under thirteen to register for the service.
The other major bill, COPPA 2.0, was introduced by Senators Ed Markey and Bill Cassidy. It was designed to update the original COPPA of 1998 and extend its protections to minors under the age of seventeen rather than only those under the age of thirteen. If signed into law, COPPA 2.0 would:
- Restrict the collection, use, and disclosure of personal information for users under seventeen.
- Limit the circumstances in which a minor’s personal information can be processed.
- Create new rights for minors and their parents, such as the right to request that collected information be deleted.
- Expand the original COPPA’s application to platforms with “knowledge fairly implied on the basis of objective circumstances” that a given user is a minor.
While several states, such as Utah and Florida, have introduced laws to address these issues, the federal government has yet to pass any comprehensive legislation addressing minors’ online safety. If this package passes the House and is signed into law by the President, it would mark a monumental milestone in regulating social media sites. For several years, pressure has been mounting at both the state and federal levels to introduce stronger measures to protect minors and rein in social media platforms.
The Impact.
If this legislative package became law, it would mark a significant change in how minors are treated on social media platforms. Both bills would require companies to make notable changes from both a technical and a cultural perspective. Technically, companies would need to implement better parental controls and remove features designed to increase user engagement, among other changes. Culturally, social media companies would need to change how they approach minors on their platforms. Rather than seeing minors as standard users, platforms would need to treat them with significantly more care regarding both content exposure and data collection, or risk significant penalties from regulators and parents alike.
For parents, these bills would mark a notable change in their rights when it comes to managing their children’s online presence. While the bills still need to be approved by both the House and the President, parents should take the time to examine their details and the powers they would grant.
Biden administration announces new AI actions.
The News.
On Friday, the Biden administration released an update on its previous AI Executive Order, announcing that Apple has joined fifteen other AI developers in signing the administration’s voluntary commitments. With this commitment, Apple joins other prominent AI developers, such as Meta, Microsoft, Amazon, Google, and OpenAI, in the pursuit of responsible AI innovation.
Additionally, the administration announced that federal agencies have completed all of the actions outlined in the original Executive Order within the 270-day time frame. These actions involved agencies taking “sweeping action to address AI’s safety and security risks, including by releasing vital safety guidance and building capacity to test and evaluate AI.” The administration also revealed that federal agencies have moved on to other AI directives scheduled on longer timeframes.
The Knowledge.
With this newest commitment, the Biden administration has now added another major technology company to support its AI guidelines. For context, these guidelines were issued in an Executive Order in October 2023 and were established to create new standards for AI that would protect consumers, prioritize privacy, improve security, and promote innovation. At the time, President Biden stated with this executive order that “AI and the companies that wield its possibilities are going to transform the lives of people around the world…but first, they must earn our trust.”
In completing their 270-day assignments, federal agencies have:
- Published final frameworks on managing generative AI risks and securely developing generative AI systems.
- Reported results of piloting AI to protect vital government software.
- Released a guide for designing safe, secure, and trustworthy AI tools for education uses.
- Published guidance on evaluating the eligibility of patent claims involving inventions related to AI technology.
- Issued a comprehensive plan for US engagement on global AI standards.
While federal agencies have taken more actions than those listed above, these represent a strong push by the Biden administration to ensure that AI’s benefits are harnessed and its liabilities minimized. As the administration continues to work with industry leaders, pressure will continue to mount on federal authorities, especially Congress, to pass legislation that directly addresses AI and its impacts on society.
The Impact.
As the Biden administration continues its efforts to better address and manage AI, these efforts to work with industry leaders and spur federal action are reflective of mounting pressure for comprehensive AI legislation. While numerous AI bills have been proposed within both the House and the Senate, no bill has been able to make it to the President’s desk so far. Until a bill is presented, the Biden administration as well as the next one will likely continue to address AI through Executive Orders, agency actions, and voluntary commitments.
For AI users, the Biden administration’s growing interest in the technology will likely result in comprehensive legislation sometime in the next few years, depending on how the upcoming elections unfold. For now, users should remember that AI is still an emerging, constantly evolving technology that poses numerous privacy and safety challenges, and they should continue to use it securely to prevent unnecessary breaches or other incidents.
Other Noteworthy Stories.
FCC advances proposal requiring political advertisers to disclose AI use.
What: The Federal Communications Commission (FCC) moved its proposal forward, requiring political advertisement makers to disclose any use of AI.
Why: Last Thursday, the FCC moved its AI regulatory proposal forward amidst concerns over the technology's ability to spread misinformation ahead of the 2024 elections. With this announcement, Jessica Rosenworcel, the FCC Chair, stated that with this move “the FCC [has taken] a major step to guard against AI being used by bad actors to spread chaos and confusion in our elections.” Currently, the proposal would require political advertisement creators to disclose the use of AI in both candidate and issue advertisements; however, this disclosure would not be required for any online or streaming advertisements.
Now that the proposal has officially passed by a 3-2 vote, it will undergo a forty-five-day public comment period followed by a fifteen-day period for the FCC to reply. After these periods, the commissioners will vote to finalize the rule.
Meta to pay $1.4 billion to settle Texas facial recognition data lawsuit.
What: Meta has agreed to pay Texas $1.4 billion to resolve a lawsuit that accused the company of illegally using its facial recognition technology.
Why: On Tuesday, Meta agreed to settle with Texas after the state alleged the company was using its facial recognition software to collect the biometric data of Texan citizens without consent. The lawsuit was originally filed in 2022, claiming that the company violated Texas’s 2009 biometric privacy law. In the lawsuit, Texas claimed that Meta had captured biometric information from uploaded photos and videos “billions of times.”
Announcing the settlement, a Meta spokesperson stated that the company, which denies any wrongdoing, was pleased to resolve the case and looks forward to “exploring future opportunities to deepen our business investments in Texas, including potentially developing data centers.” Texas Attorney General Ken Paxton stated that the settlement demonstrates the state’s “commitment to standing up to the world’s biggest technology companies and holding them accountable for breaking the law and violating Texans’ privacy rights.”
Justice Department says TikTok collected user data on social issues.
What: In a legal brief, the Justice Department accused TikTok and its parent company, ByteDance, of using a system that allowed employees to communicate sensitive user information to ByteDance engineers in China.
Why: On Friday, the Justice Department filed a brief claiming that ByteDance and TikTok used a system, called Lark, to send sensitive user information to engineers in China. The filing alleges that Lark allowed the companies’ employees to collect information on users’ content and social opinions. In the brief, the Department wrote that “by directing ByteDance or TikTok to covertly manipulate that algorithm, China could for example further its existing malign influence operations and amplify its efforts to undermine trust in our democracy and exacerbate social divisions.” The brief also alleged that TikTok has used this tool to suppress content based on specific trigger words.
Aside from this brief, federal authorities have also drafted a classified version of the filing, which they have requested not be shared with either company.