Welcome to the CAVEAT Weekly Newsletter, where we break down some of the major developments and happenings occurring worldwide when discussing cybersecurity, privacy, digital surveillance, and technology policy.
At 1,800 words, this briefing is about an 8-minute read.
At a Glance.
- UK pivots on its AI strategy by significantly cutting funding investments.
- Research finds that China is growing its social media influence operations ahead of the US election.
UK reshapes its AI strategy amid pressures to cut costs.
The News.
Last week, Britain’s newly elected Labour government announced that it will be creating a new artificial intelligence (AI) strategy that aims to significantly cut costs from the previous government’s plan. Apart from cutting costs, this new strategy will also aim to prioritize public sector adoption rather than emphasizing direct investment. The revised strategy is expected to be announced sometime in September before the government releases its Autumn Statement in October.
This new strategy comes after Prime Minister Keir Starmer’s government reassessed the United Kingdom’s (UK) existing AI and technology strategies. Starmer’s government has already cut over 1.3 billion pounds of planned technology-related funding. This cut included an 800 million pound investment that was meant for the University of Edinburgh to develop a supercomputer.
While it is not uncommon for a new administration to pivot from previously established policies, concerns have been raised that these funding cuts are an overreaction and an unforced error. In response to these concerns, a government spokesperson stated that Starmer’s administration recognizes the transformative power of AI and is committed to harnessing the technology for people across the UK.
The Knowledge.
While Britain's pivot in AI strategy may come as a surprise to some given how many governments have been investing in AI development, Starmer’s government has signaled its intentions to cut back on these investments since taking office in July. Aside from cutting the planned 1.3 billion pounds previously mentioned, Starmer’s government has also announced that it is considering other moves, such as scrapping a planned international office for the UK’s AI Safety Institute. For context, this office was going to be established in San Francisco and was set to open this summer. Along with potentially shutting down this planned office, Starmer’s government also removed key leadership at the Institute by letting go of one of the Institute’s co-founders, Nitarshan Rajkumar, from his role as a senior policy advisor.
With Starmer’s government adjusting Britain’s approach to managing AI investment, this domestic shift marks a notable change in international AI leadership. Just last year, Britain hosted the world’s first AI Safety Summit, and now the country has signaled its intention to let other nations take the lead in developing, implementing, and securing AI. While the UK will likely continue to invest in AI and participate in global summits discussing the technology, given the actions already taken, it is unlikely to lead these conversations as it did under the previous administration.
The Impact.
With Starmer’s administration signaling a new direction for how the nation will approach AI, both people and organizations in the UK should expect September’s new strategy outline to expand upon the moves already taken. While these large funding cuts are unlikely to be felt by average citizens, these cuts are likely going to significantly impact organizations tied to the technology industry.
While it is unclear what September’s strategy announcement will fully entail, people should expect more funding cuts to be announced, as well as a higher bar for securing future funding. However, as the UK steps back from AI investment and leadership, other European nations will take its place, as seen in France, which has already committed 2.5 billion euros for domestic investment in the emerging technology.
Chinese social media operation found pushing divisiveness ahead of US election.
The News.
On Tuesday, Graphika, a social media analytics firm, published new research finding that a Chinese state-linked influence operation was pushing divisive online narratives ahead of the upcoming 2024 election. Graphika concluded that the operation was becoming more aggressive, having expanded its use of fake personas impersonating United States (US) voters. Graphika found that the operation was using these fake accounts to spread divisive narratives about various social issues.
Through their research, Graphika identified fifteen “Spamouflage” accounts on X and another one on TikTok, each claiming to be a US citizen or US advocate who was “frustrated by American politics and the West.” Graphika found that as the election has drawn closer, these accounts have “seeded and amplified content denigrating Democratic and Republican candidates, sowing doubts in the legitimacy of the US electoral process, and spreading divisive narratives about social issues.” While Graphika stated that many of these accounts were unable to garner significant online traction, they highlighted one instance where the TikTok account garnered 1.5 million views on one of its videos.
From their research, Graphika has concluded that they expect these influence operations to continue to ramp up their operational efforts over the coming months as they continue to leverage existing divisions to portray the US as a declining global power. Additionally, Graphika’s researchers have also predicted that these influence operations will expand on their existing strategies by implementing new tools, tactics, and technologies to create more deceptive content and increase the scale of their activities.
The Chinese Embassy has refuted the report’s findings and stated that the nation will not meddle in the US election.
The Knowledge.
While this most recent report may not come as a surprise to many, this research is indicative of the serious concerns surrounding the upcoming elections and how foreign nations are continuing to take actions to influence results and undermine voter confidence. For several months now, reports have continued to emerge regarding how foreign nations have begun to increase their efforts when spreading misinformation as well as when working to undermine political campaigns.
Despite officials having stated that this election is the most secure US election to date, reports related to misinformation and interference are being published at an alarming rate. For example, in August, reports emerged expressing concerns about how Iranian actors were targeting both the Trump and Harris campaigns with phishing emails aimed at gaining access to sensitive information. While these phishing emails may seem minor at first glance, these efforts have already had major ramifications, as staffers in the Trump campaign were duped by the emails, allowing Iranian actors to successfully infiltrate the campaign.
However, despite many foreign actors seeking to influence November’s elections, federal officials are working to combat these efforts. Aside from working with private social media platforms to remove fake accounts, the Biden administration announced yesterday a series of new sanctions against Russia in response to the nation's efforts to manipulate US opinions ahead of the election. With this announcement, the Departments of Justice and Treasury stated they were seizing thirty-two hostile internet domains and levying multiple sanctions against pro-Russian groups.
The Impact.
While the federal government can work with social media platforms to root out fake accounts, these efforts are at best reactionary solutions to a larger problem. As long as hostile influence operations can continue to create new accounts without any lasting repercussions, they will continue their efforts to foment discontent and undermine voter confidence. By continuing to build upon the sanctions announced yesterday, the federal government can begin to better address these hostile influence operations and work to minimize their impacts on voters.
While the average US citizen will not be able to stop these large influence operations, voters should remain vigilant over the coming months as influence operations and other similar actions will only continue to spread. By remaining vigilant and verifying information with trusted sources, voters can ensure that they are making informed decisions when they cast their ballots this November. People working on political campaigns should likewise remain vigilant against hostile actions. Given Iran’s already successful phishing campaign, it is likely that similar efforts will be used over the coming weeks to target staffers in the hopes of gaining unauthorized access to highly sensitive information.
Highlighting Key Conversations.
In this week’s Caveat Podcast, our team sat down with Josh Rosenzweig, the Senior Director of AI and Innovation at Morgan Lewis. Throughout this conversation, our team discussed the importance of managing generative AI and how to ensure security, governance, and compliance given AI’s rapid proliferation. As AI continues to become further entrenched in our society, we discuss this paradigm, what steps have already been taken, and what items still need to be addressed.
Like what you read and curious about the conversation? Head over to the Caveat Podcast for the full scoop and additional compelling insights. Our Caveat Podcast is a weekly show where we discuss topics related to surveillance, digital privacy, cybersecurity law, and policy. Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com. Hope to hear from you.
Other Noteworthy Stories.
CrowdStrike executive to testify before Congress on IT outage.
What: A senior executive at CrowdStrike will testify before a US House of Representatives subcommittee on September 24th regarding the software update that caused a global IT outage.
Why: Last Friday, CrowdStrike announced that Adam Meyers, the senior vice president for counter-adversary operations, will testify before the House Homeland Security Cybersecurity and Infrastructure Protection subcommittee. With this announcement, Representative Mark Green stated, "considering the significant impact CrowdStrike’s faulty software update had on Americans and critical sectors of the economy…we must restore confidence in the IT that underpins the services Americans depend on daily.”
For context, this hearing was called after CrowdStrike released a faulty patch in mid-July that heavily impacted numerous services, including healthcare providers, airlines, and banks.
X officially shut down in Brazil.
What: Over the weekend, Brazil began to take steps to shut down the social media platform, X, after the nation’s supreme court issued an order to shut down the site.
Why: After several months of legal battles, Brazil’s Supreme Court Judge, Alexandre de Moraes, issued an order to immediately suspend the site. With this order, Judge de Moraes stated that internet service providers in Brazil would be required to block X within five days. The order also imposed fines of 50,000 reais, or roughly $9,000, on citizens who use VPNs to access the platform. The judge stated that this order would remain in effect until X complies with all previous court orders and pays all fines.
This suspension will block tens of millions of users in Brazil, one of X’s largest markets.