Twitter engagements after the combat death of Major General Soleimani.
Twitter has been flooded with posts vowing revenge for the combat death of Major General Soleimani, commander of Iran's Quds Force, CyberScoop reports. The Atlantic Council has been tracking the social media hashtags #HardRevenge and #DeathToAmerica, both of which had been introduced before US forces killed Iran's principal commander responsible for mobilizing Shi'ite militias in the Arab world. The hashtags are self-explanatory; they figure in what appears to be an organized campaign that represents itself as grassroots, but which is probably astroturf laid down by Tehran. Popular astroturf, but astroturf nonetheless.
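The mechanics of this kind of hashtag tracking are simple enough to sketch, even though the Atlantic Council hasn't published its pipeline. A minimal illustration in Python, assuming tweets have already been collected as (timestamp, text) pairs; the collection step and the field layout here are our assumptions, not the Council's methodology:

```python
# Minimal sketch: bucket mentions of tracked hashtags by hour to watch a
# campaign's volume over time. Assumes tweets are already collected as
# (timestamp, text) pairs; illustrative only, not the Atlantic Council's
# actual methodology.
from collections import Counter
from datetime import datetime

TRACKED = {"#hardrevenge", "#deathtoamerica"}

def hourly_volume(tweets: list[tuple[datetime, str]]) -> Counter:
    buckets = Counter()
    for ts, text in tweets:
        # Normalize tokens so "#HardRevenge!" matches "#hardrevenge".
        tokens = {t.lower().rstrip(".,!?") for t in text.split()}
        for tag in TRACKED & tokens:
            hour = ts.replace(minute=0, second=0, microsecond=0)
            buckets[(tag, hour)] += 1
    return buckets
```

A sudden spike in one bucket, out of proportion to organic conversation, is the sort of signal that suggests coordination rather than grassroots sentiment.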
There's been some more conventional exploitation of Twitter, too. The Daily Beast describes how Twitter took down at least two accounts this week that were impersonating journalists (one from the New York Post, the other an Israeli writer) and distributing bogus stories in Iran's interest. Again, it's unclear who's directing the impersonation: hacktivists or the Iranian government itself. False flags, as McAfee researchers point out, are to be expected during periods of heightened tension. But as problematic as attribution of disinformation is likely to remain, the Telegraph argues that Iran has built a significant information operations capability.
A case study in rumor control.
Rumor control has been a prominent option for governments working against disinformation. Among the other sequelae of the US drone strike that killed General Soleimani was a rush on two US Government sites, one belonging to the Selective Service System, which would administer any wartime conscription Congress might decide to enact, and another operated by Federal Student Aid (FAFSA), an office of the US Department of Education. The rumor, which apparently emerged spontaneously, was that the US was about to begin drafting people into the military. As the Selective Service System (effectively a caretaker agency since conscription ended in the US in 1973) tweeted Friday morning, "Due to the spread of misinformation, our website is experiencing high traffic volumes at this time. If you are attempting to register or verify registration, please check back later today as we are working to resolve this issue. We appreciate your patience."
FAFSA was involved because of the longstanding requirement that men must register with Selective Service if they are to receive Federal financial aid. FAFSA tweeted, Friday afternoon, "We know there are questions on this…registering with Selective Service has been a longstanding requirement to receive federal student aid/a federal job. However, the U.S. military has been all-volunteer since 1973 & Congress would need to pass a new law to institute a draft." President Carter directed in 1980 that registration for Selective Service resume, and that's where conscription has remained ever since: a requirement that young men register (recent court decisions may extend that requirement to young women as well). That's about it.
Selective Service persists in its current rump state because, as President Clinton explained to Congress in 1994, "Maintaining the Selective Service System and draft registration provides a hedge against unforeseen threats and a relatively low-cost 'insurance policy' against our underestimating the maximum level of threat we expect our Armed Forces to face." Fears of a draft seem to have abated somewhat over the weekend, at least enough that the Selective Service website has again become normally accessible, but the sentiment may resurface. It's become a minor meme, for one thing, and for another it seems to cater nicely to the reenactor's impulse that so often appears to underlie American meditations on both politics and war: pretending to be the SDS circa 1968 has more in common with pretending to be in the Army of Northern Virginia circa 1863 than anyone tempted to one or the other might be inclined to think. Military.com has a useful guide for the perplexed, and really, it's not that complicated.
Why then spend any time thinking about this meme? After all, as the Military Times pointed out, "Qasem Soleimani is not Franz Ferdinand." Nor, they might have added, are mass armies anywhere near as important in 2020 as they were in 1914. But the swift propagation of fear about any low-probability event--and the resumption of the draft in the US is an extremely low-probability event--is always instructive. Government agencies charged with handling disinformation, especially election-season disinformation, might well study this episode for useful lessons learned.
Another case study: more on Taiwan's work against Beijing's disinformation campaign.
Taiwan holds its presidential election this Saturday, and the outcome will be worth watching to see how the island republic handles disinformation and influence campaigns mounted against it from the mainland. TechCrunch, which has a review of the country's preparations, notes that the response so far has involved a mixture of public policy, government action, and private initiative. It's complicated: Taiwan's domestic political actors themselves have an unusually vigorous and combative presence online, which can make it difficult to disentangle home-grown electoral politics from foreign meddling. Buzzfeed has an account of what it calls a new breed of public relations firms that specialize in disinformation-as-a-service. There are several active in Taiwan, and one of them offers to “use every tool and take every advantage available in order to change reality according to our client's wishes.”
Beijing seemed to decrease the optempo of its disinformation campaign last month, amid some international speculation that it regarded its preferred candidate's party as a probable loser, and so decided not to throw good money after bad. But this seems to have changed: Foreign Affairs says the mainland returned with a strong push as Taiwan's campaign entered its final week. China also hasn't been shy about strong-arming Western companies into toeing the Party line on Taiwan. Quartz has a couple of recommendations for people writing about the disinformation that crosses the Taiwan Strait: don't go on about "reunification," or refer to Taiwan as a "breakaway province." Those, Quartz says, amount to adopting Beijing's point of view.
Indonesian information operations in Papua.
Information operators are usually thought of as working in the shadows, but not always. Witness Corporal Yunanto Nugroho, whom the Indonesian Army recognized publicly on National Heroes Day for the awards he's won in information technology. What Corporal Yunanto does, according to Reuters, is run a coordinated network of inauthentic websites, many of which feature fabricated quotations and other forms of fake news directed at ethnic Papuans and intended to dissuade them from separatist activity. Reuters attributed the sites to Corporal Yunanto after noticing that they were all registered to a single mobile number belonging to him.
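Reuters didn't publish its tooling, but the underlying move (pivoting on a registrant attribute shared across many domain records) is straightforward to sketch. A minimal illustration in Python, assuming a hypothetical CSV export of registration records with "domain" and "registrant_phone" columns:

```python
# Sketch: cluster domains by a shared registrant phone number, the pivot
# Reuters reportedly used. The CSV layout ("domain", "registrant_phone")
# is a hypothetical stand-in for whatever records an investigator holds.
import csv
from collections import defaultdict

def cluster_by_phone(path: str) -> dict[str, list[str]]:
    clusters = defaultdict(list)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            phone = row["registrant_phone"].strip()
            if phone:
                clusters[phone].append(row["domain"])
    # Only numbers tied to more than one domain are interesting pivots.
    return {p: d for p, d in clusters.items() if len(d) > 1}

if __name__ == "__main__":
    for phone, domains in cluster_by_phone("registrations.csv").items():
        print(f"{phone}: {len(domains)} domains -> {', '.join(domains)}")
```

The lesson for operators, presumably, is the converse: a single piece of reused infrastructure metadata can unravel an entire inauthentic network.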
Looking ahead to 2020 influence operations.
Foreign influence operations directed at the US 2020 elections have already begun. The Washington Post reports that some veterans' sites have been hijacked by Russian operators with a view toward disrupting the upcoming campaign. Veterans, who are reputed to vote at higher rates than most other demographics, are particularly attractive targets for the St. Petersburg troll farms, and the Post complains that the US Administration has been asleep at the switch when it comes to doing anything about it.
The New York Times has obtained an internal memo by Facebook veteran and vice president of virtual reality Andrew "Boz" Bosworth, in which he cautions the company against working to prevent the re-election of President Trump. As much as he "desperately" wants to see Mr. Trump defeated, and as much as he donated to candidate Clinton ("the max"), he doesn't think it's Facebook's place to work for that defeat. Candidate Trump won because of Facebook, the Boz thinks, but not because of Russian influence. Rather, the campaign made unusually astute use of the platform.
That Cambridge Analytica scandal that rocked Facebook? In the eyes of Menlo Park, "Cambridge Analytica is a total non-event. They were snake oil salespeople. The tools they used didn’t work, and the scale they used them at wasn’t meaningful. Every claim they have made about themselves is garbage." His conclusion, after an excursion through Rawlsian moral philosophy, is this: "My takeaway is that we were late on data security, misinformation, and foreign interference. We need to get ahead of polarization and algorithmic transparency."
It's difficult for anyone to look good, let alone wise, in a leaked internal memo, but in this case Bosworth's assessment of the negative, disruptive line of Russian influence operations, and of the way those operations found amplification through various other, independent channels (including the sale of snake oil) is worth a look. Consider Facebook's new policy against "manipulated media," released on January 6th. Excluding parody and satire, Facebook will henceforth take action against material that meets two criteria:
- "It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And:
- "It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic."
It remains to be seen whether those criteria can be applied with success approaching what Facebook has achieved in screening for "coordinated inauthenticity." The Washington Post, for example, thinks the exception for parody and satire is big enough to drive a truckload of influence through (they're particularly exercised about the video doctored to make Speaker of the House Pelosi look drunk or insane, which arguably passes the satire test), and the Post isn't alone in this respect. SC Magazine interviewed a number of figures in the security industry who offered essentially the same take, and ESET's We Live Security provides a milder set of similar animadversions. A post at OneZero calls it "better than nothing," but that's about the highest praise on offer. Naked Security enjoys itself by pointing out that "cheapfakes and shallowfakes" aren't banned at all: if your imposture is coarse and clumsy enough to be immediately apparent to the casual observer, then, hey, no problemo.
In fairness to the House of Zuckerberg, however, there have been few specific suggestions for how Facebook might have done better. It's a tough and intractable problem, with deep roots in the distinction between intensional and extensional contexts. No one has the remotest idea of how to distinguish truth from lies at scale. The US House Committee on Energy and Commerce yesterday held hearings on "Manipulation and Deception in the Digital Age." Facebook was the only business to testify.
A side note, more of a question: would unacknowledged sponsored content be flagged as inauthentic under Facebook's policies? Consider the complimentary piece about Facebook's efforts to ensure election integrity that Mashable flayed Teen Vogue over. It had a bogus byline, wasn't marked as sponsored content, then was, then wasn't, and then was finally pulled, but not before Facebook COO Sheryl Sandberg shared it on her personal Facebook page. Get ahead of that algorithmic transparency, kids. Unless of course you're a junior NCO bucking for a commendation in the Indonesian Army.
Deepfakes in information warfare.
Foreign Affairs thinks that deepfakes will soon become a widely adopted geopolitical tool. It's not just that they can be so persuasive, but that they're also so easily disseminated. And detecting them isn't a trivial problem. US Special Operations Command last month issued a Request for Information (an RFI, not yet a Request for Proposals) that asks about commercial-off-the-shelf software that could be used as a "prototype for use of understanding the information environment that can detect misinformation, disinformation and mal-information campaigns in near to real-time to directly support information operations within Special Operations Command. The resulting analysis will be surfaced on a user interface and contextualized by the relevant narratives and network of accounts. This software must leverage an intuitive cloud-hosted user interface software which provides processing and analyzing multi-modal social media and web data and has the programmatic ability to dissect and categorize information sectors. The government requires a prototype data pipeline that will identify viral and trending content for threat assessments and score data with a ranking system that highlights the likelihood of being fake or deceptive and display the information on the user interface." It checks many familiar blocks: "deep learning, natural language processing, and dynamic network analysis." In effect, Special Operations Command wants a tool that will recognize lies and the intent behind them, at scale and in near-real-time. This will be a tall order.
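Stripped of the RFI's language, the requested pipeline reduces to: ingest multi-modal content, score it for virality and for likelihood of deception, rank, and surface the results. A heavily simplified Python sketch of that shape, with placeholder scoring functions standing in for the deep learning, NLP, and network-analysis components the RFI actually envisions:

```python
# Heavily simplified sketch of the pipeline shape the SOCOM RFI describes:
# ingest -> score virality -> score deception likelihood -> rank for review.
# Both scoring functions are crude placeholders; a real system would put
# trained models and network analysis where these stubs sit.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str
    shares: int
    followers_reached: int

def virality_score(p: Post) -> float:
    # Placeholder trending signal: shares relative to reach.
    return p.shares / max(p.followers_reached, 1)

def deception_score(p: Post) -> float:
    # Placeholder deception signal: naive keyword markers.
    markers = ("breaking", "they don't want you to know", "share before deleted")
    hits = sum(marker in p.text.lower() for marker in markers)
    return min(hits / len(markers), 1.0)

def rank_for_review(posts: list[Post]) -> list[tuple[Post, float]]:
    scored = [(p, virality_score(p) * deception_score(p)) for p in posts]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

Even this toy version makes the difficulty visible: the hard part isn't the pipeline plumbing, it's the deception scoring, which is exactly where recognizing lies and the intent behind them has to happen.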
Deepfakes become a commodity.
TechCrunch reported over the weekend that Snap has acquired AI Factory, the company whose technology underlies Snapchat's Cameo feature. Cameo animates a user's selfie and inserts it into a short video. This isn't, of course, necessarily illegal or even deceptive, but the acquisition looks like a harbinger of the commodification of deepfake technology.
Bloomberg has offered an account of how deepfakes can be produced. Their leading examples are demonstration videos of former President Barack Obama using an obscenity to refer to President Trump and of former President Richard Nixon doing a comedy routine, both of which, the reporters assure us, are complete frauds. (We believe that Mr. Obama didn't say what the video represents him as saying. About Mr. Nixon we're less certain--he did after all do that "sock it to me?" schtick on Laugh-In. Funny guy.)
The Telegraph, out at the Consumer Electronics Show in Las Vegas, marvels at the "virtual human" technology on display there. The technology isn't intended as a tool for disinformation (there's much hoopla about second lives, avatars, and the like), but it's not a stretch to see its applications there.
There's also a healthy market for shallowfakes, and a minor subsection of the marketing industry will provide them. Demand is largely driven, the Washington Post observes, by the dating industry, which needs more pictures of various varieties of women to draw the attentions of various varieties of prospective daters. This is of course an evolution of the old saloon "ladies' night" convention (as in, "Ladies: no cover, no minimum") and of another genre we're all familiar with: the stock images long peddled to corporations for use in brochures and websites (like the ones an American SETA contractor might post--happy twenty-somethings romping through a field of wildflowers on their way to their fannies-in-seats contract work down at the Federal Building). Nothing illegal or improper about this, especially if the pictures remain nothing more than eye candy and don't misrepresent themselves in the service of fraud. It is worth noting, however, that the more convincing shallowfakes could serve catphishing, phony pulse-of-the-voter stories, and so on. The images are pretty convincing: we looked at the ones in the Post article and found ourselves inclined to say, hey, we think we ran into that one at a conference somewhere.
On the importance of not believing your own propaganda (or at least of providing corroborative detail, intended to give artistic verisimilitude to an otherwise bald and unconvincing narrative).
Radio Free Europe/Radio Liberty this week offered an interesting historical look at what happens within a totalitarian state when it invests so heavily in disinformation that it loses its grip on the plausibility of its own lies. The case they describe is one in which Stalin's organs, then led by Nikolai Ivanovich Yezhov, People's Commissar for Internal Affairs during the Great Purge, arrested a photographer and a photographic retoucher on the preposterous charge that they had placed an image of Trotsky's face into a picture of some trees that appeared in a news photo. The resemblance is difficult enough to perceive (not nearly as good, for example, as Waffle Stop's Elvis on toast), but scrutinizing the picture with the eye of Stalinist faith, the NKVD convinced itself it saw Trotsky, at that time the principal fiend in Soviet demonology and the moving intelligence behind all wrecking, espionage, and left deviationism.
The investigators arrested both the photographer and the retoucher, Vsevolod Skamandr and Vladimir Tsetnarovsky, respectively, and subjected their negatives to close forensic inspection. Both men were eventually cleared of that particular charge, but sadly things didn't end well for either of them. Tsetnarovsky was released after a year of strict interrogation in prison; he was conscripted during the Second World War and died at the front. Skamandr was also found not guilty of disseminating Trotsky's image, but he was forced to confess falsely to espionage and was liquidated shortly thereafter, in 1937.
People's Commissar Yezhov himself was purged and shot on February 4th, 1940. Trotsky lasted a little longer, but not much: he died in Mexican exile on August 21st, 1940, the day after an NKVD assassin attacked him with an ice ax. The ways in which the Red Terror struck backward at its own executioners are well-known. It remains an open historical question how much of the terror and manifest injustice was intentional (Lenin's secret police chief, Iron Felix Dzerzhinsky, said all of it was intentional, that these were materialist features, and not idealistic bugs) and how much was induced by individual fear and institutional paranoia.