At a glance.
- Automating disinformation.
- China's emerging disinformation policy and practice.
- GhostWriter and the European Union.
- Deepfakes? How about a shallowfake?
Automating disinformation.
Researchers at Georgetown University’s Center for Security and Emerging Technology (CSET) warn, Breaking Defense reports, that advanced technology now makes it possible to produce disinformation at scale. They've looked at OpenAI's Generative Pre-trained Transformer (GPT-2 and GPT-3, the second and third versions of the technology) and believe that what they call "autocomplete on steroids" "can be prompted with a short cue to automatically and autonomously write everything from tweets and translations to news articles and novels." The automation works best when the machines are teamed with humans; the natural language processing is good at picking up what might loosely be called attitude and tone, and can extrapolate readily from a small number of human-supplied prompts. Breaking Defense quotes one of the researchers, Micah Musser, as saying, “Human-machine teaming allows disinformation operators to scale up their operations, while maintaining high-quality outputs and the ability to vet those outputs.”
That won't necessarily mean that humans will find it easy to ride herd on AI. It's not clear, the Georgetown researchers say, whether the sort of technology they looked at has been used in the wild, in part because it's so difficult to detect (GPT-3 is said to be especially good at tweeting). A discussion of human-moderated AI in Protocol suggests that the process is entropic: human moderators will continue to find it difficult to control what's artificially generated.
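To give a concrete sense of how low the technical barrier is, here is a minimal sketch of the kind of prompt-driven generation the CSET researchers describe, using the openly released GPT-2 model and the Hugging Face transformers library. The prompt text and parameter choices are our own invented illustration, not anything taken from the study itself:

```python
# A minimal sketch of prompted text generation with the openly
# released GPT-2 model (Hugging Face transformers). The prompt is
# an invented example for illustration only.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sampling reproducible

prompt = "Officials confirmed today that"
outputs = generator(
    prompt,
    max_length=60,           # cap the length of each continuation
    num_return_sequences=3,  # draft several variants per prompt
    do_sample=True,          # sample rather than greedy-decode
)

# In the human-machine teaming Musser describes, an operator would
# now vet these drafts and keep only the plausible ones.
for out in outputs:
    print(out["generated_text"], "\n---")
```

A few lines like these can turn one short cue into an arbitrary number of fluent variations, which is exactly why vetting, rather than writing, becomes the human's job.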
A new interest (and facility) in disinformation from Beijing, and its place in China's national strategy.
World Politics Review takes a look at China's activity in cyberspace and notes growing focus on, and sophistication in, disinformation. The Chinese Communist Party and the government it operates are old hands at propaganda, but that propaganda has for decades had a largely domestic audience, and, when it reached foreign audiences, bore the characteristic stiffness of the Maoist ideologue. But now Beijing is, in the Review's words, "taking a page from Russia's playbook." The entries on that page include "the use of accounts in multiple languages [including English, Russian, Spanish, German, Korean and Japanese] across many different social media platforms" and "attempts to physically mobilize protests on the ground." They also include the mobilization of troll accounts in social media to serve various influence campaigns.
At least one difference between the familiar Russian approach and the one being taken by China, as described by the Review, is that Chinese disinformation normally has an easily stated positive goal (positive from the point of view of Beijing). "We didn't make the coronavirus; the Americans did," would be one example. Russian disinformation is both more fluent and more opportunistic, directed, as we've had occasion to mention before, more at increasing the adversaries' friction than at persuading the world to any particular beliefs.
The assessment in World Politics Review is that the development of Chinese disinformation operations is worth noting, but that it's not, at least not yet, cause for serious alarm:
"Interesting though these developments are, we shouldn’t exaggerate the danger they represent. To really 'win' at cyber disinformation requires a recklessness and devil-may-care attitude about the consequences, aimed at amplifying social divisions and sowing mistrust in authority and government among one’s adversaries regardless of the costs. This is where Russia excels, building on techniques honed during the Cold War era. Beijing still seems to be feeling its way on how much is enough and how far is too far, and this may help to explain the somewhat clunky approaches that we have seen to date."
Domestically and internationally, China has taken note of the role of automation in influence operations. The Record reports that recently enacted “Internet Information Service Algorithm Recommendation Management” regulations are designed to establish standards for algorithmic recommendations and to ensure that they "promote socialist core values." That is, Beijing wants to see more solidarity and resolution and collective progress, and a lot fewer, for example, k-pop groupies or twerking attention hounds. (Or, for that matter, devout Muslims or Christians, we suspect.) There's concern about international presentation, to be sure, but here again the focus is principally domestic. The report closes on a chilling note: some of the language surrounding the present official climate of opinion is disturbingly reminiscent of what was said at the outset of the Great Proletarian Cultural Revolution, which lasted from 1966 until Mao's death in 1976. Estimates of the Cultural Revolution's death toll range from the hundreds of thousands to some twenty million. That degree of suffering doesn't seem to be in the offing, but the official tropes aren't reassuring.
GhostWriter and European (especially German) elections.
The European Union last Friday publicly attributed the GhostWriter cyberespionage and disinformation operation to Russia. The statement said:
“Some EU Member States have observed malicious cyber activities, collectively designated as Ghostwriter, and associated these with the Russian state. Such activities are unacceptable as they seek to threaten our integrity and security, democratic values and principles and the core functioning of our democracies.
“These malicious cyber activities are targeting numerous members of Parliaments, government officials, politicians, and members of the press and civil society in the EU by accessing computer systems and personal accounts and stealing data. These activities are contrary to the norms of responsible State behaviour in cyberspace as endorsed by all UN Member States, and attempt to undermine our democratic institutions and processes, including by enabling disinformation and information manipulation.
“The European Union and its Member States strongly denounce these malicious cyber activities, which all involved must put to an end immediately. We urge the Russian Federation to adhere to the norms of responsible state behaviour in cyberspace.”
No immediate action was announced, but as the statement’s final sentence warned, “The European Union will revert to this issue in upcoming meetings and consider taking further steps.”
The attribution and warning didn't say which nations had received the attentions of GhostWriter, but, as the Washington Post notes, the timing of the communiqué suggests concern for Germany, which held elections over the weekend. The outcome of that election makes it likely that a center-left coalition led by Social Democrats with the smaller Green and Free Democrat parties as its partners will form the government that will succeed retiring Chancellor Angela Merkel’s.
An essay in Foreign Policy sees signs of a growing disposition in Germany's electorate to give credence to conspiracy theories, and glumly predicts that some such theories are likely to gain traction as the new government forms.
Shallowfake: the curious case of that Ozy phone call.
The New York Times had a very odd story this past weekend about new media company Ozy Media. "This past winter," the Times writes, "Goldman Sachs was closing in on a $40 million investment in Ozy, a digital media company founded in 2013, and there seemed to be a lot of reasons to do the deal. Ozy boasted of a large audience for its general interest website, its newsletters and its videos, and the company had a charismatic chief executive, Carlos Watson, a onetime cable news anchor who had worked at Goldman Sachs early in his career. And, crucially, Ozy said it had a great relationship with YouTube, where many of its videos attracted more than a million views."
Goldman Sachs had scheduled a call that would give them, among other things, some details about Ozy's relationship with YouTube. One of the key participants, Alex Piper, who leads unscripted programming for YouTube Originals, said he was having trouble with Zoom, and asked that the meeting be switched to a traditional conference call, to which the others agreed. Mr. Piper went on to tell everyone things were great, just jake, with Ozy, and that the company's CEO Carlos Watson ("charismatic," according to the Times) was a swell guy and everything he was cracked up to be. But Mr. Piper's voice started to sound kind of weird as the call went on, and, when Goldman Sachs reached out to Mr. Piper over at YouTube, the poor man was confused: it wasn't him at all. Mr. Piper was never on the call; someone had been impersonating him. That pretender turned out to be, Ozy said apologetically, Samir Rao, Ozy's co-founder and chief operating officer. Mr. Watson put the episode down to an unspecified mental health crisis on Mr. Rao's part, saying that Mr. Rao had taken some time off and had since been compassionately received back at work.
Anyway, Goldman Sachs walked away from the deal, even though both Messrs. Rao and Watson are Goldman Sachs alumni. And Google, which owns YouTube, wasn't amused at all, since in Mountain View's opinion a crime may have been committed. Google reported the matter to the FBI.
However this turns out, one lesson emerges for those interested in disinformation: you don't need a deepfake to fake. Sometimes just making your voice sound like someone else might work long enough to do the job. But try to make it plausible: no one is going to buy a phone call from Minnie Mouse or Betty Boop or Richard Nixon or Spongebob.