At a Glance.
- The EU states that Russia is spreading disinformation ahead of European elections.
- AI whistleblowers warn about the dangers of emerging technology.
EU claims Russia is spreading disinformation ahead of European elections.
The News.
Governments across Europe have accused Russia of spreading disinformation ahead of the European Union (EU) Parliament elections taking place later this week, from June 6th to June 9th. They have stated that Russia’s actions are part of a wider campaign of interference aimed at undermining European governments and destabilizing the region. European Commission President Ursula von der Leyen responded to these alleged efforts by stating that Europe has a choice: either be strong or submit to authoritarianism.
European leaders have stated these efforts have included:
- Spreading disinformation, with governments accusing Russia of manipulating the truth by spreading conspiracy theories, circulating deepfake videos, and publishing false information on “doppelganger” sites.
- Running the fake news site voiceofeurope.com, which the Czech Republic has stated is leading a pro-Russian influence operation in Europe.
- Paying European Parliament lawmakers to promote Russian propaganda.
Russian officials have denied each of these claims, countering that Western nations are instead conducting their own “full-scale information war” to tarnish Russia’s reputation and cast it as an enemy.
The Knowledge.
With European Parliament elections taking place at the end of this week and the US elections later this year, concerns surrounding misinformation and fake news are rapidly growing. In both regions, government officials have begun to routinely raise concerns about the rapid proliferation of misinformation and its influence on elections. In particular, these concerns center on the use of “doppelganger” sites and on how misinformation spreads across social media platforms.
Doppelganger sites are fake sites created to closely resemble legitimate ones. Malicious actors build them to spread disinformation while making it appear to come from a legitimate source, lending it credibility. For example, a cloned version of the French Foreign Ministry website published an article alleging that France was planning a new tax to help fund Ukraine’s war effort. After the article was posted, 200 fake Facebook accounts were used to spread it.
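Because doppelganger domains typically differ from the legitimate domain by only a character or two (a swapped top-level domain, an inserted letter), one naive way to surface them is plain string similarity. The sketch below is purely illustrative and not tooling described in this article; the protected-domain list and the 0.8 threshold are hypothetical assumptions, and real detection systems rely on far richer signals (certificate data, registration age, homoglyph tables).

```python
from difflib import SequenceMatcher

# Hypothetical list of legitimate domains to protect. A real system
# would use a curated registry plus additional signals, not this list.
KNOWN_DOMAINS = ["diplomatie.gouv.fr", "europa.eu"]


def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity ratio between two domain names."""
    return SequenceMatcher(None, a, b).ratio()


def flag_lookalikes(candidate: str, threshold: float = 0.8) -> list[str]:
    """Flag known domains that the candidate closely resembles
    without matching exactly (an exact match is the real site)."""
    return [
        known
        for known in KNOWN_DOMAINS
        if known != candidate and similarity(candidate, known) >= threshold
    ]


# A cloned site often just swaps the TLD:
print(flag_lookalikes("diplomatie.gouv.ru"))  # flags diplomatie.gouv.fr
```

The exact-match exclusion matters: the legitimate domain itself should never be flagged, only near misses.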
These fake Facebook accounts represent another area of concern, as disinformation continues to spread rapidly across many social media platforms and spreading it has become both easy and widespread. For context, one survey found that sixty-seven percent of Americans surveyed said they have encountered fake news on social media, with ten percent of adults admitting they have knowingly shared it. With the introduction and rise of artificial intelligence (AI), concerns have grown about how the technology will affect the spread of disinformation and the creation of deepfakes. These impacts are already being felt: in one notable example, AI was used to create a deepfake of President Biden during the New Hampshire primary to discourage voters from participating in that election.
The Impact.
While no major actions have been taken in either the EU or the US to significantly address the rise of disinformation, citizens and lawmakers in both regions have become acutely aware of these issues and of the impact they could have on elections and nations as a whole. Some steps have been taken, such as the Federal Communications Commission (FCC) banning AI-generated voices from robocalls, but significant further action would be needed to properly address these issues, especially regarding election-related content.
In the meantime, people should remain vigilant when consuming news and ensure it comes only from trusted, verified sources. They should also verify that their information is not coming from “doppelganger” sites purposefully attempting to manipulate the truth. By properly vetting their information, citizens and lawmakers can ensure that they are making decisions grounded in reality rather than fiction.
AI whistleblowers warn of potential dangers.
The News.
On Tuesday, a group of AI developers called for greater transparency in AI development and protections for AI whistleblowers. The group describes itself as “current and former employees at frontier AI companies.” In its letter, the group wrote that “as long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public.” The letter continued, stating that “broad confidentiality agreements block [them] from voicing [their] concerns, except to the very companies that may be failing to address these issues.” Both current and former employees of major AI development companies, including OpenAI, DeepMind, and Anthropic, signed the letter.
In addition to greater transparency, the letter asked these AI companies to commit to several guiding principles, including not entering “into any agreement that prohibits ‘disparagement’ or criticism of the company for risk-related concern” and investing in a culture of “open criticism.” The group expressed hope that better guidance from the scientific community, policymakers, and the public will help mitigate these risks.
One of the AI development companies, OpenAI, responded to the letter by stating that the company is “proud of [their] track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk.” The statement added that the company supports efforts to promote rigorous debate and to engage with governments and civil society.
The Knowledge.
While the group’s letter marks a notable instance of numerous industry insiders raising concerns about emerging technology, these concerns are not new. As AI has continued to grow rapidly in development and use, concerns surrounding the technology’s safety and its respect for human rights have been at the forefront of the conversation for some time. Numerous government officials and researchers have raised ethical and security concerns, including ones related to privacy rights, security issues, and the potential for bias, among others.
While various governments have begun to take action to better regulate the technology, as seen with the EU’s AI Act and NIST’s AI Risk Management Framework, this process must be multifaceted, as the group argues. While some companies have taken steps to increase their transparency and accountability, as seen with OpenAI’s creation of an internal safety review board last week, these efforts are neither industry-wide nor have they had any meaningful impact yet. For context, OpenAI’s Safety and Security Committee, announced last week, will be led by company directors Bret Taylor, Adam D’Angelo, and Nicole Seligman, along with CEO Sam Altman. The committee is tasked with reviewing the company and, after ninety days, providing public recommendations on what it can improve.
The Impact.
While no major policy developments have resulted from this letter, the move is indicative of the concerns surrounding the AI industry. Over the past several months, developers, policymakers, and stakeholders have routinely raised concerns about how AI is being developed and used and about what safeguards have been put in place to secure the emerging technology. While these larger questions will not be fully resolved for some time, AI users and developers should be aware of these concerns and account for them, especially when validating data outputs and securing the technology against malicious exploitation. Those involved in creating and advancing AI should also be aware that government involvement and oversight will only continue to grow, and will most likely have tangible impacts on how AI can be developed and used as well as on what safeguards need to be implemented.
Other Noteworthy Stories.
Russian disinformation campaign takes aim at Paris Olympics.
What: Microsoft states that a Russian disinformation campaign is targeting the upcoming Paris summer Olympic Games.
Why: In a blog post published on Sunday, Microsoft stated that Russian-led disinformation is targeting the upcoming Paris summer games, including the use of falsified news websites and a feature-length documentary aimed at tarnishing the reputation of the International Olympic Committee (IOC). In the blog post, Microsoft wrote that “the most worrisome disinformation advanced by pro-Russian actors has sought to impersonate militant organizations and fabricate threats to the games amidst the Israel-Hamas conflict.” Microsoft elaborated on the film, highlighting that it featured AI-generated audio impersonating Tom Cruise and that the application Cameo was used to trick other US celebrities into endorsing it. The company added that “the Kremlin's propaganda and disinformation machine is unlikely to hold back in leveraging its network of actors to undermine the Games as the Olympics draw near.” Microsoft also noted that the campaign intensified late last year after the IOC allowed Russian athletes to compete in the games as neutral competitors.
Poland to boost cybersecurity after fake news attack.
What: Poland’s government has announced that it will invest significantly to boost the nation's cybersecurity.
Why: Poland’s digitalization minister announced on Monday that the government will spend over three billion zlotys to boost the nation’s cybersecurity efforts. The increased funding comes after the state news agency, PAP, was targeted by a cyberattack attributed to Russian attackers. With Poland’s European Parliament elections approaching this Sunday, authorities announced that they are on high alert for any attempts by Moscow to interfere with the vote. These concerns follow the Polish government’s repeated accusations that Russia is attempting to destabilize Poland over the nation’s efforts to provide military aid to Ukraine.
With this announcement, the Polish government stated that they are looking to invest in and create a “Cyber Shield.”
Microsoft announces it will take more steps to allay the EU’s concerns surrounding Teams.
What: Microsoft will take additional steps to resolve EU concerns surrounding Teams.
Why: On Tuesday, Microsoft announced that it will take additional steps to resolve the EU’s concerns regarding the company’s Teams application following an antitrust investigation. The investigation traces back to a complaint filed by Salesforce in 2020.
While Microsoft already announced in April that it would sell the Teams application separately from its Office products, the company has signaled that it is ready to take additional actions “to find a resolution to regulators’ concerns.” While it is unclear what these steps will fully entail, the antitrust investigation could have significant impacts on the Teams application over the coming months.