Welcome to the CAVEAT Weekly Newsletter, where we break down major developments and happenings from around the world in cybersecurity, privacy, digital surveillance, and technology policy.
At 1,850 words, this briefing is about a 9-minute read.
At a Glance.
- Cybercriminals are increasingly working with authoritarian regimes to target the US and its allies.
- New AI assessment tool released to evaluate EU AI Act compliance.
Microsoft report finds that cybercriminals are working with authoritarian regimes to target the US.
The News.
On Tuesday, Microsoft published its Digital Defense Report for 2024, which detailed how authoritarian regimes like China, Iran, and Russia are increasingly relying on criminal networks to assist their cyberespionage and hacking operations. The report highlighted several examples of these connections, such as an instance in which Iran used criminal groups to infiltrate an Israeli dating site and then sell or ransom the personal data obtained. In another example, Microsoft linked Russia to a criminal network that had infiltrated more than fifty devices used by the Ukrainian military to gather information that could aid Russia’s war efforts.
After publishing the report, Tom Burt, Microsoft’s vice president of customer security and trust, stated that “we’re seeing in each of these countries this trend towards combining nation-state and cybercriminal activities.” However, while Microsoft noted this increase in joint nation-state and cybercriminal activity, the company highlighted that there was no evidence these nations are sharing resources or working with the same criminal networks.
The Knowledge.
With this latest report, Microsoft has highlighted the new methods hostile nations and cybercriminals are exploring to compromise their targets. The report covered several notable trends from 2023 to 2024. Aside from noting greater cooperation between nations and cybercriminals, Microsoft also highlighted how these nations centered their disruption efforts on active military conflicts, regions with ongoing tensions, and the upcoming United States (US) election. Regarding cybercriminals, Microsoft observed that these actors dramatically increased their efforts to run scams and launch ransomware attacks. Below are some of the notable statistics covered in the report:
- Seventy-five percent of Russian targets were either in Ukraine or were NATO member states.
- Iranian activity targeting Israel increased a significant forty percent.
- Ransomware attacks increased 2.75x year-over-year.
- Tech scams have increased 400% since 2022.
Aside from these actors increasing their efforts to influence regional events and compromise targets, Microsoft also highlighted how hostile nations have increasingly begun to “drive discord on sensitive domestic issues leading up to the US election.” For months, news stories have emerged detailing how hostile actors have routinely targeted the US with influence campaigns in attempts to sow division and spread misinformation. Just last month, the Department of Justice seized thirty-two websites that imitated legitimate news sites to direct users to Russian-produced content that fomented division and supported Russian government narratives. While these sites have been shut down, this example is just one of many instances in which hostile nations have attempted to sway voter views ahead of the November election.
Lastly, Microsoft’s report emphasized how threat actors have begun to experiment with artificial intelligence (AI). Microsoft highlighted how China and Russia in particular have already begun to exploit AI-generated images, audio, and video to craft misleading content for their influence operations. However, while Microsoft did highlight these concerns, the company emphasized that these AI-supported influence operations have so far been largely ineffective. Additionally, Microsoft noted that while threat actors have begun to use AI, so have defenders, enabling security teams to respond faster and more effectively.
The Impact.
While this report does not provide actionable recommendations on how to protect people from these emerging threats, the information it contains is critical for security professionals to understand. Perhaps the most notable findings for the average person are the dramatic increases in both ransomware and tech scams. To counteract these rising threats, businesses and individuals should invest in reliable security training that helps users identify phishing attacks and scams more effectively, as well as implement more robust security measures that block phishing attempts before they ever reach users.
Regarding nations spreading misinformation, US citizens should expect these influence operations to continue over the coming weeks as the election draws nearer. People should continue to verify the information they consume before forming opinions and sharing news. By exercising due diligence, people can ensure that they are making informed decisions when casting their votes rather than inadvertently supporting foreign influence efforts.
EU AI Act checker released to assess compliance pitfalls.
The News.
On Wednesday, a new tool was released that aims to help AI developers assess whether their AI models comply with the European Union (EU) AI Act. With this new tool, developers can test various generative AI models’ compliance across a variety of categories tied to the AI Act. While the tool was not developed or funded by the European Commission, a Commission spokesperson stated that “the Commission welcomes this study and AI model evaluation platforms as a first step in translating the EU AI Act into technical requirements.”
This resource was developed by the Swiss company LatticeFlow AI and its research partners. Following the EU AI Act’s compliance framework, the tool assesses an AI model across dozens of compliance categories and gives the model a score between zero and one, with a score of one meaning the model is fully compliant. The company has already published assessments of several leading AI models, including the following scores:
- Anthropic’s Claude 3 Opus: 0.89 Aggregate Score
- OpenAI’s GPT-4 Turbo: 0.89 Aggregate Score
- OpenAI’s GPT-3.5 Turbo: 0.81 Aggregate Score
- Meta’s Llama 2 70b Chat: 0.78 Aggregate Score
- Google’s Gemma-2-9B: 0.72 Aggregate Score
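The scoring scheme described above (per-category scores rolled up into a single zero-to-one aggregate) can be sketched roughly as follows. Note that the category names and the simple-mean aggregation here are assumptions for illustration only; the article does not describe LatticeFlow AI's actual methodology.

```python
# Hypothetical sketch of a 0-to-1 compliance scorecard like the one described
# above. The category names and the simple-mean aggregation are illustrative
# assumptions, not LatticeFlow AI's documented formula.

def aggregate_score(category_scores: dict[str, float]) -> float:
    """Combine per-category scores (each 0.0-1.0) into one aggregate score."""
    if not category_scores:
        raise ValueError("no category scores provided")
    for name, score in category_scores.items():
        if not 0.0 <= score <= 1.0:
            raise ValueError(f"score for {name!r} out of range: {score}")
    # Simple unweighted mean, rounded to two places like the published scores.
    return round(sum(category_scores.values()) / len(category_scores), 2)

# Example: a model strong on transparency but weaker on robustness.
scores = {
    "transparency": 0.95,
    "copyright": 0.90,
    "cyberattack_resilience": 0.80,
    "discriminatory_output": 0.75,
}
print(aggregate_score(scores))  # 0.85
```

A real assessment would likely weight categories by regulatory importance rather than averaging them equally, but the roll-up idea is the same.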
LatticeFlow AI’s CEO, Peter Tsankov, emphasized that these test results were largely positive and that the tool would offer AI developers a roadmap for ensuring their models comply with the AI Act. Additionally, Tsankov stated that “with a greater focus on optimizing for compliance, we believe model providers can be well-prepared to meet regulatory requirements.”
The Knowledge.
While this resource should not be seen as a catch-all compliance assessment for AI models, these tests do offer developers an opportunity to assess whether their models fall short of the AI Act’s requirements and to identify areas for improvement. Additionally, as the EU further defines its compliance requirements early next year through its “Code of Practice,” this tool is likely to remain a strong aid for developers in identifying specific pitfalls over the coming months.
For greater context, the Code of Practice is a core document outlined by the EU AI Act that will function as a non-legally binding checklist businesses can use to assess how compliant their models are. More specifically, the Code of Practice aims “to facilitate the proper application of the AI Act’s rules for general-purpose AI models, including transparency and copyright-related rules, systemic risk taxonomy, risk assessment, and mitigation measures.” While the final version will not be released until April 2025, drafting has already begun; earlier this month, the Commission brought AI experts together to begin outlining the document.
The Impact.
Since this tool was not created by the European Commission, it should not be the only method developers use to assess whether their models are legally compliant. Rather, it should be used as a benchmark to help validate compliance and identify shortfalls, especially once the EU releases its “Code of Practice” in early 2025.
For AI developers, this tool reflects the growing pressure across the EU to ensure that major AI models comply with the AI Act. While the AI Act will not come into full effect for some time, developers should be aware of its requirements and ensure their models are compliant unless they are willing to risk fines of up to thirty-five million euros, or seven percent of their global annual turnover, whichever is higher.
For everyday AI users, people should take time to examine these assessment results to help determine which AI models are best suited for their needs. While compliance does not ensure security, these assessments can be useful resources to help compare various models and determine which AI platform is best suited for daily needs.
Highlighting Key Conversations.
In this week’s Caveat Podcast, our team sat down with Katie Bowen, vice president and general manager of Global Public Sector and Defense at Synack, Inc. During the conversation, she shared her thoughts on CISA’s new guidance on the Federal Civilian Executive Branch (FCEB) Operational Cybersecurity Alignment (FOCAL) Plan and on federal vulnerability management practices. Our team also did a deep dive into one of the biggest misconceptions surrounding the First Amendment.
Like what you read and curious about the conversation? Head over to the Caveat Podcast for the full scoop and additional compelling insights. Our Caveat Podcast is a weekly show where we discuss topics related to surveillance, digital privacy, cybersecurity law, and policy. Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com. Hope to hear from you.
Other Noteworthy Stories.
Biden administration to provide $750 million to North Carolina-based firm for advanced computer chips.
What: The Biden administration has announced that it will provide Wolfspeed with up to $750 million in direct funding to support the creation of advanced computer chips.
Why: On Tuesday, the Biden administration announced its latest funding initiative through the CHIPS and Science Act. Aside from the $750 million grant, a group of investment funds led by Apollo, the Baupost Group, Fidelity Management & Research Company, and Capital Group plans to match this grant. Additionally, Wolfspeed also expects to receive a one billion dollar tax credit for advanced manufacturing.
With this announcement, Commerce Secretary Gina Raimondo stated that “artificial intelligence, electric vehicles, and clean energy are all technologies that will define the 21st century, and thanks to proposed investments in companies like Wolfspeed, the Biden-Harris administration is taking a meaningful step towards reigniting US manufacturing of the chips that underpin these important technologies.”
Google requests US judge’s app store ruling be put on hold.
What: Google has requested that a California federal judge pause his sweeping court order that would require it to open its Play Store to greater competition.
Why: Last week, Google submitted a court filing that, if accepted, would pause the judge’s ruling, which would otherwise go into effect on November 1st. The filing asks the court to stay the injunction, which Google argues would introduce “serious safety, security, and privacy risks into the Android ecosystem.”
For greater context, this injunction emerged after Epic Games filed a lawsuit against Google alleging that the company was illegally monopolizing how consumers download applications and make in-app purchases on Android devices.