Welcome to the CAVEAT Weekly Newsletter, where we break down some of the major developments and happenings occurring worldwide in cybersecurity, privacy, digital surveillance, and technology policy.
At 1,750 words, this briefing is about a 7-minute read.
At a Glance.
- European Commission opens investigation into TikTok.
- US House task force releases end-of-year report on AI.
EU opens TikTok election interference investigation.
The News.
On Tuesday, the European Commission initiated formal proceedings against the social media company TikTok over alleged election interference in the recent Romanian presidential election. With this move, the Commission stated that it intends to request information from the company and will examine TikTok’s policies on political advertising and paid-for political content. The Commission also intends to investigate how the company’s systems generate user recommendations and whether those recommendations are at risk of being manipulated.
With these formal proceedings underway, the Commission is now empowered to take further enforcement actions, if deemed necessary, and to accept any remediation commitments TikTok may offer. Announcing the move, Commission President Ursula von der Leyen stated that “we must protect our democracies from any kind of foreign interference.” Von der Leyen continued, emphasizing that “whenever we suspect such interference, especially during elections, we have to act swiftly and firmly.”
ByteDance, TikTok’s parent company, responded to the investigation by emphasizing that it has worked to protect the platform’s integrity throughout dozens of elections. ByteDance also noted that TikTok did not accept any paid political advertisements and took active steps to remove content violating its terms related to misinformation and hate speech.
The Knowledge.
The European Commission’s move comes after it ordered TikTok to freeze data related to the Romanian presidential election on December 5th. For context, that highly controversial election was annulled a day after the freeze was ordered, when Romania’s constitutional court annulled the results of the first round of voting after Calin Georgescu won. The court ordered the annulment after a series of declassified intelligence documents was released, implying that Georgescu benefitted from a substantial Russian influence operation that interfered with the results of the vote. While this investigation does not allege that TikTok intentionally aided the influence operation, it marks the third investigation that the Commission has launched against TikTok, with the other two revolving around the platform’s risks to minors.
Aside from the growing pressure in Europe, ByteDance is also facing increased pressure within the United States (US). Just last week, a federal court rejected the company’s challenge to the “TikTok ban” law. In response, TikTok filed an emergency appeal with the US Supreme Court to block the ban, writing a letter urging the Justices to act before the law’s January 19th deadline. While it is unclear how the Supreme Court will respond to this appeal, the social media platform is now facing immense pressure across the world as regulators and courts have repeatedly expressed concerns about its ties to China, how it collects data, and how it can influence users.
The Impact.
With this latest European investigation, TikTok has again found itself in the crosshairs, facing pressure from multiple governmental bodies. While this growing pressure has not yet resulted in any significant penalties, the US’s “TikTok ban” law goes into effect on January 19th and could have significant implications worldwide. Additionally, as the European Commission continues its investigation into the Romanian election, its findings could also damage the company, exposing it to greater scrutiny, regulation, and financial penalties in Europe.
Regardless of public opinion, it is clear that governments have grown increasingly concerned about TikTok and its alleged harms. People and businesses that rely on TikTok should be prepared to operate through significant downtime if the US law takes effect, and should be aware that regulators may require changes to the application.
House task force releases end-of-year report on AI.
The News.
On Tuesday, the US House of Representatives Task Force on Artificial Intelligence (AI) released its end-of-year report. The report assesses how the US can best utilize AI in social, economic, and health settings while also emphasizing the risks that the technology poses. To create the report, the task force’s twenty-four congressional members spoke with over one hundred technical experts, government officials, academics, legal scholars, and business leaders to shape their policy recommendations. The report aims to act as a blueprint for future legislation and other congressional action.
One of the key themes emphasized throughout the report is AI misuse and the risks associated with the technology. In the report, the members stated that the “adverse effects from flawed or misused technologies are not new developments but are consequential considerations in designing and using AI systems…[which could] deprive Americans of constitutional rights.”
With this report, task force co-chairs Jay Obernolte and Ted Lieu wrote that “this report highlights America’s leadership in its approach to responsible AI innovation while considering guardrails that may be appropriate to safeguard the nation against current and emerging threats.”
The Knowledge.
With this document, Congress has delivered a comprehensive report on where improvements related to AI can be made and which risks need to be addressed. While the report covers many topic areas, one of the most important concerns intellectual property (IP). Since AI’s massive proliferation throughout society, IP issues have been a constant point of tension between AI companies and content producers. For example, in December 2023, the New York Times filed a lawsuit against both OpenAI and Microsoft alleging that their AI platforms infringed on its copyrights. The New York Times is not the only news company suing AI developers for copyright infringement; the New York Post, the Wall Street Journal, and many others have filed similar lawsuits. On this issue, the report calls for legislation that further clarifies IP laws, regulations, and agency activity, as well as for measures to better counter the harms of AI deepfakes.
Another key discussion revolved around the proliferation of open-source AI models in the market. In this section, the members wrote that while open models encourage both innovation and competition, they also represent a significant risk area. Lawmakers urged their fellow members of Congress to focus on demonstrable harms and to determine how these models could affect key areas such as chemical, biological, radiological, and nuclear technologies.
Aside from these two major topic areas, the report also focused on addressing AI’s impacts on other policy sectors such as:
- Energy Usage and Data Centers
- Content Authenticity
- Education & Workforce
- Data Privacy
- National Security
The Impact.
While this report is not legislation itself, the document can be seen as a clear roadmap for 2025’s AI legislative goals. Since no comprehensive AI legislation has been passed at either the state or federal level, it is clear that passing some form of comprehensive AI legislation will be a major goal for the incoming Congress and the second Trump administration.
Stakeholders involved in AI development, as well as businesses that utilize AI systems, should take the time to dive into this report and understand the many areas and recommendations it proposes. By understanding which key areas will be in focus in 2025, stakeholders can prepare for how potential legislation could impact their operations and plan accordingly.
Highlighting Key Conversations.
In this week’s Caveat Podcast, our team sat down with Casey Bleeker, the CEO of SurePath AI, to discuss the state of AI regulation and how this landscape could change in the US in 2025. Our team also examined how the Biden administration is taking steps to retaliate against China over its massive telecommunications hack. Lastly, our team discussed a lawsuit against Character.ai filed by a mother who alleges that the company’s chatbot suggested her child murder his parents and poisoned the son against his family.
Like what you read and curious about the conversation? Head over to the Caveat Podcast for the full scoop and additional compelling insights. Our Caveat Podcast is a weekly show where we discuss topics related to surveillance, digital privacy, cybersecurity law, and policy. Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com. Hope to hear from you.
Other Noteworthy Stories.
EU privacy regulator fines Meta 251 million euros.
What: The lead European Union (EU) privacy regulator has fined Meta 251 million euros.
Why: On Tuesday, Ireland’s Data Protection Commission (DPC) fined Meta in relation to a 2018 Facebook breach that impacted over twenty-nine million users. For context, attackers exploited a vulnerability in the site’s “View As” feature to access users’ personal data. Announcing the fine, DPC Deputy Commissioner Graham Doyle stated that “by allowing unauthorized exposure of profile information, the vulnerabilities behind this breach caused a grave risk of misuse of these types of data.”
A Meta spokesperson released a statement emphasizing that “we took immediate action to fix the problem as soon as it was identified, and we proactively informed people impacted as well as the Irish Data Protection Commission.” Meta also announced that it intends to appeal the decision.
Meta urges California Attorney General to stop OpenAI’s for-profit proposal.
What: Meta has announced its opposition to OpenAI becoming a for-profit company and has asked California’s attorney general to block OpenAI’s effort.
Why: Last week, Meta submitted a letter to California Attorney General Rob Bonta arguing that allowing OpenAI to become a for-profit company would set a concerning precedent for other startups.
The letter reads: "OpenAI’s conduct could have seismic implications for Silicon Valley. If OpenAI’s new business model is valid, non-profit investors would get the same for-profit upside as those who invest the conventional way in for-profit companies while also benefiting from tax write-offs bestowed by the government."
Additionally, in the letter, Meta expressed support for Elon Musk’s position that the public should have a say in whether OpenAI is allowed to become a for-profit company.