Welcome to the CAVEAT Weekly Newsletter, where we break down some of the major developments and happenings occurring worldwide when discussing cybersecurity, privacy, digital surveillance, and technology policy.
At 1,500 words, this briefing is about a 6-minute read.
At a Glance.
- France accuses Russia of cyberattacks.
- Congress passes the first major AI bill.
France names Russia as a primary cyber threat.
The news.
On Tuesday, France’s foreign ministry accused Russia's military intelligence agency of orchestrating numerous cyberattacks over the past five years. More specifically, France accused the Russian advanced persistent threat group APT 28 of attacking dozens of French targets, including ministries, defense firms, and think tanks. According to France's National Cybersecurity Agency (ANSSI), the most recent attack occurred in December, and Russian-affiliated groups launched 4,000 other attacks throughout 2024.
With this report, France's foreign minister stated, “these destabilizing activities are unacceptable and unworthy of a permanent member of the United Nations Security Council.” The minister continued, stating, “alongside its partners, France is determined to use all means at its disposal to anticipate, deter, and respond to Russia's behavior in cyberspace.”
Notably, France has accused Russia of similar attacks before, but this is the first time the nation has made such accusations based on its own intelligence.
The knowledge.
In recent years, APT 28 has been the subject of international criticism and scrutiny. Beyond France's latest accusations, Germany also publicly named the hacking group last year, alleging that it was behind a series of cyberattacks targeting both Germany's ruling party and aerospace firms. However, APT 28 is far from the only Russian group facing such allegations.
While allegations against Russian cyber actors are not new, the volume and intensity of these alleged attacks appear to be rising. Since Russia invaded Ukraine, cyberspace has become an increasingly targeted domain. Last week, the Dutch military intelligence agency, MIVD, released a report that echoed France’s findings. MIVD’s director, Peter Reesink, stated that “we see the Russian threat against Europe is increasing” and that MIVD had observed “the first Russian cyber sabotage act against a public service, with the aim of gaining control of the system.”
Alongside these attacks, MIVD noted that these incidents reflect a growing Russian effort to launch hybrid attacks and to sabotage key Western infrastructure, such as internet cables, water systems, and energy supplies.
The impact.
Taken together, these nations’ findings highlight a sustained, and likely escalating, cyber conflict between Russia and Western nations. The scale and persistence of these Russian-linked cyber operations suggest they are not isolated incidents, but rather parts of a larger international strategy.
For organizations, especially those tied to government agencies, critical infrastructure, defense firms, and similar businesses, this escalating dynamic poses significant risks to ongoing operations. APT attacks are inherently difficult to predict, counter, and recover from, and these sectors appear to be increasingly in attackers’ crosshairs.
While most organizations cannot fully counter nation-state threats, they should take the time to address these concerns and mitigate potential impacts where possible. By developing comprehensive incident response plans, hardening supply chains, and enhancing threat-hunting efforts, organizations can better insulate themselves and their operations.
Congress passes a major AI bill.
The news.
On Monday, the House of Representatives passed the Take It Down Act, a broadly supported bipartisan bill that would make it a federal crime to publish nonconsensual intimate imagery, or NCII, of any person, including AI-generated depictions. Beyond criminalizing the posting of NCII, the bill would also mandate that online platforms remove any reported NCII within forty-eight hours.
For context, the bill was originally introduced by Senators Ted Cruz and Amy Klobuchar and unanimously passed the Senate in February. The bill now heads to President Trump, who is expected to sign the bill.
With the House’s passing of this bill, First Lady Melania Trump stated:
“Today’s bipartisan passage of the Take It Down Act is a powerful statement that we stand united in protecting the dignity, privacy, and safety of our children.”
Senator Klobuchar also commented, “these images can ruin lives and reputations, but now that our bipartisan legislation is becoming law, victims will be able to have this material removed from social media platforms and law enforcement can hold perpetrators accountable.”
The knowledge.
Once signed into law, the Take It Down Act would represent the first comprehensive federal effort to address AI. To date, a significant portion of federal efforts to address AI have been driven by government agencies and Executive Orders (EO), as Congress has been unable to pass any comprehensive AI legislation. Beyond being shaped largely by each administration's priorities, these executive efforts have also been routinely disrupted.
For example, former President Biden signed EO 14110, the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence order. This EO not only defined the former administration’s AI policy goals but also mandated that agencies create AI management roles and outline AI procurement policies. However, despite the EO being the most comprehensive federal AI effort at the time, President Trump rescinded the order, signaling a clear shift in how his administration planned to address AI. These changing policies have made federal efforts inconsistent and less impactful.
Given the lack of federal leadership on AI, the responsibility of managing AI has largely fallen to state governments. While there have been some notable successes, such as Colorado’s AI Act and Utah’s AI Policy Act, these efforts inherently create a patchwork of legislation with inconsistent standards, requirements, and protections across the nation.
The impact.
The passage of the Take It Down Act could represent a turning point in federal AI regulation efforts. While the bill does not address some of the broader issues related to AI, such as algorithmic bias, privacy concerns, and automated decision-making, it does establish the first federal effort to regulate AI-created content.
Given the broad support behind this bill, its passage may help build momentum for future federal AI legislation. As conversations surrounding AI-related risks continue to grow, this effort could spur federal lawmakers to better address these issues through comprehensive legislation.
Social media platforms should understand the compliance requirements tied to this bill, especially its takedown expectations. For individuals, particularly victims of NCII, this law offers new tools to have abusive material removed and to hold perpetrators accountable.
Highlighting key conversations.
In this week’s Caveat Podcast, our team revisits our previous policy deep dive conversation on AI. Throughout this conversation, our team assesses what efforts have been made at both the federal and state levels to address the emerging technology and how successful these efforts have been. Additionally, our team discusses how AI policy may change under the Trump administration and what issues will likely be at the forefront of President Trump’s AI policies.
Like what you read, and curious about the conversation? Head over to the Caveat Podcast for the full scoop and additional compelling insights. Our Caveat Podcast is a weekly show where we discuss topics related to surveillance, digital privacy, cybersecurity law, and policy. Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com. Hope to hear from you.
Other noteworthy stories.
Nigerian Tribunal upholds fine against Meta.
What: A Nigerian tribunal upheld the Federal Competition and Consumer Protection Commission’s (FCCPC) $220 million fine against Meta.
Why: Last Thursday, Nigeria’s Competition and Consumer Protection Tribunal upheld the fine that the FCCPC levied against Meta last July. For context, the FCCPC fined Meta for discriminatory and exploitative practices against Nigerian consumers. The fine followed a thirty-eight-month investigation, which found that Meta did not give users the option or opportunity to self-determine or withhold consent regarding the gathering, use, and sharing of their personal data.
Google hearing set for May 2nd.
What: A United States (US) Judge set a May 2nd hearing date to discuss remedies for Google’s antitrust violations regarding its online advertising technology.
Why: Last Thursday, US District Judge Leonie Brinkema set Google’s hearing date to address potential remedies. While the hearing will not establish the final remedies, it aims to give the court a broad sense of the remedies being pursued before focusing on specific measures.
For context, this hearing comes after Judge Brinkema found Google liable on April 17th for “willfully acquiring and maintaining monopoly power” in markets for publisher ad servers and ad exchange services.
IBM investing $150 billion in US.
What: IBM announced a significant investment to grow US manufacturing capabilities.
Why: On Monday, IBM announced this investment, which will take place over the next five years. More specifically, the company said it would dedicate $30 billion of that total to manufacturing mainframe and quantum computers.
When making this announcement, Arvind Krishna, IBM’s chairman, president, and CEO, stated that “we have been focused on American jobs and manufacturing since our founding 114 years ago, and with this investment and manufacturing commitment we are ensuring that IBM remains the epicenter of the world’s most advanced computing and AI capabilities.”