At a glance.
- Once and future deepfakes.
- In Russian cyber ops, influence ops predominate.
- Razzing as an influence technique.
Once and future deepfakes.
Trend Micro has published a report looking at the current and future impacts of deepfakes. The researchers note that deepfakes have already been used in social engineering attacks, and these attacks will increase as the technology improves:
- “There is enough content exposed on social media to create deepfake models for millions of people. People in every country, city, village, or particular social group have their social media exposed to the world.
- “All the technological pillars are in place. Attack implementation does not require significant investment and attacks can be launched not just by national states and corporations but also by individuals and small criminal groups.
- “Actors can already impersonate and steal the identities of politicians, C-level executives, and celebrities. This could significantly increase the success rate of certain attacks such as financial schemes, short-lived disinformation campaigns, public opinion manipulation, and extortion.
- “The identities of ordinary people are available to be stolen or recreated from publicly exposed media. Cybercriminals can steal from the impersonated victims or use their identities for malicious activities.
- “The modification of deepfake models can lead to a mass appearance of identities of people who never existed. These identities can be used in different fraud schemes. Indicators of such appearances have already been spotted in the wild.”
Trend Micro says organizations should implement multifactor authentication (preferably biometric, where available), and train their employees to look for visual discrepancies that are often present in current deepfakes. Some deepfakes, however, are already indistinguishable from the real thing, so the researchers conclude: “Significant policy changes are required to address the problem on a larger scale. These policies should address the use of current and previously exposed biometric data. They must also take into account the state of cybercriminal activities now as well as prepare for the future.”
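To make the multifactor recommendation concrete, here is a minimal sketch of one common second factor, a time-based one-time password checked out of band, built on the open-source pyotp library. The enrollment flow, secret handling, and function names are illustrative assumptions rather than anything Trend Micro prescribes, and a real deployment would pair such a check with the biometric measures and employee training described above.

import pyotp

# Illustrative enrollment: generate and store a per-user secret once,
# and provision it to the user's authenticator app.
enrollment_secret = pyotp.random_base32()
totp = pyotp.TOTP(enrollment_secret)

def approve_sensitive_request(submitted_code: str) -> bool:
    # A deepfaked voice or video call can imitate a person, but it cannot
    # produce the current code from that person's enrolled device.
    return totp.verify(submitted_code)

# The caller reads the current code from their authenticator app.
print(approve_sensitive_request(totp.now()))  # True for a valid, current code
print(approve_sensitive_request("000000"))    # almost certainly False

The point of the sketch is simply that the second factor lives outside the audio or video channel an attacker can fake, which is why it blunts impersonation even when the likeness itself is convincing.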
AwareGO outlines the risks deepfakes pose for mis- and disinformation on social media, particularly during election cycles. Users should check reputable news sources to confirm what they read on social media, especially when it seems sensational or outrageous. The firm identifies three tells that often betray misinformation and disinformation. They're not dispositive evidence that a story is bogus, but they're worth keeping in mind:
- "Clickbait – A headline is more sensationalized than the article’s content warrants. This is done to gain social media engagement, but it can distort the truth."
- "Misleading headlines – Headlines that don’t look too exaggerated but are still misleading or present the story in a different light from the actual content."
- "Satire and parody – Meant to be just for fun but might look like the truth to some."
Don't dismiss satire and parody as obvious and harmless. It can be surprising how many people find them difficult to recognize. Just a week ago, you may recall, a "sarcastic" post by a German journalist was one of the principal ways fake news about a coup in China found its way into mainstream reporting.
Deepfakes also may be finding a place in the entertainment market. Actor Bruce Willis, for example, was said to have sold the rights to his "digital twin," presumably his recreated likeness, to a Russian firm archly called "Deepcake." The BBC reports that Mr. Willis has since denied having done so, but the appropriately fake story raises certain possibilities. Matt Moynahan, President & CEO of OneSpan, thinks the technology is growing ubiquitous:
“High profile cases like this point to the growing ubiquity of deepfake technology in our society. Being able to convincingly replicate an individual’s physical likeness has real-world implications for both consumers and businesses, especially as we begin to interact more in Web 3.0 environments. Regardless of whether Willis has consented to his image being used, the issue of consent will not stop cyber criminals from stealing the likeness of others.
"The lack of security and identity protection is growing rapidly in the development of Web 3.0. This issue of fake users and bots, already endemic throughout today’s internet, is likely to plague future digital interactions. A cautious and security-first approach must also be applied to future digital interactions within Web 3.0. Often, security has been focused on securing end-to-end processes. However, the growing threat of deepfakes shows there’s been a lack of securing and authenticating the actual interactions between people or companies. Organizations must take a step back and recognize how they are exposed as they transition to Web 3.0. The answer rests on authenticating and identifying all involved parties.”
An overview of the state of Russian cyber operations in the hybrid war so far.
Lawfare describes the relative unimportance of cyber operations in Russia's hybrid war. The reason, an essay argues, is not that Russia lacks serious cyber capability, but rather that there's little use for cyber operations in a war of the kind presently being waged. There's no doubt something to that, and cyber operations may indeed have limited battlefield potential, although some possible counterexamples come readily to mind. What about activity traditionally conducted as electronic warfare, like jamming or deception? What about disruption of civilian and military infrastructure, like power distribution systems? Both have clearly been shown to be within Russian capabilities, yet neither seems to have had significant effect.
Lawfare does make a strong case that influence operations have become more important to Russia than any of the more destructive attacks that had been widely feared and expected. "Though cyberwarfare did not occur as anticipated in Russia’s war against Ukraine, it has played an important role from the start," the essay concludes. "The engagements in cyberwar have left the United States and its allies with two challenges: determining how to handle information warfare and developing an understanding of how the particular set of actions in this war change our perception of how cyberwar might—or might not—take place in future conflicts."
But such influence operations also seem to have fallen short of what Moscow had desired, at least insofar as wooing or frightening foreign audiences is concerned. See, for example, the strong sense of isolation communicated by official Russian propagandists in this broadcast designed for domestic consumption. Dmitry Sablin, Deputy Chairman of the Defense Committee, says that Russia has no allies and should expect none, that it needs to rely on its own resources for victory and survival. (An interlocutor's observation that Belarus is an ally prompts laughter.)
Razzing as an influence technique.
The North Atlantic Fella Organization (or NAFO, also properly known by its French acronym, OFAN) has been waging a meme war against Russia over Mr. Putin's war against Ukraine. The casual, self-organizing group of hacktivists posts derisive pictures and text that invite an official Russian reply, and it's the replies themselves that amplify the original meme. The characteristic picture NAFO uses is a doge, a Shiba Inu, often crudely photoshopped into a military uniform, sometimes distorted, now and then accompanied by a snippet of quotation from a Russian official. Ambassador Mikhail Ulyanov's "You pronounced this nonsense. Not me," has been a particular favorite. CyberScoop has an account of the Fellas that's generally positive, and that credits their dada approach with scoring some success. They quote founder Matt Moores: "The power of what we’re doing is that instead of trying to come in and point-by-point refute, and argue about what’s true and what isn’t, it’s coming and saying, ‘Hey, that’s dumb.’ And the moment somebody’s replying to a cartoon dog online, you’ve lost if you work for the government of Russia.”
The late science writer Martin Gardner used to say, when writing about particularly absurd forms of pseudoscience, that a belly laugh was better than a refutation, and similar thinking seems to underlie NAFO's approach. It's not argument; it's not even really satire. It's heckling. It's not Swift, still less La Rochefoucauld, not even Dorothy Parker. It's a Bronx cheer, with just a little more semantic load. When the history of the first hybrid war is written, it will be interesting to see the footnotes devoted to the Fellas. In the meantime, the Fellas are at least raising money for Our Lady of Top Attack.