At a glance.
- An overview of COVID-19 disinformation.
- Spontaneous popular misinformation.
- The difficulty of scaling a response to disinformation.
A quick review of COVID-19 disinformation.
The most interesting disinformation operations currently running come from Tehran, Beijing, and Moscow. They have an international reach, to be sure, but they also have a significant domestic audience that may well be their primary target.
Consider Iran, where the line has been well represented by the official Mehr News Agency: the US is the bad guy in the pandemic. Mehr quotes the Leader of the Islamic Revolution, Ayatollah Seyyed Ali Khamenei, who addressed the Iranian nation live on Sunday. He rejected a US offer to send medical supplies to Iran, which continues to suffer heavily from the virus. "Also, you Americans are accused of having developed the virus yourself," the Ayatollah said. "You cannot be trusted. What if the medicine you delivered to Iran actually caused the virus to stay?"
The accusation that the Americans developed COVID-19, or at least initiated its spread, came of course from China's Foreign Ministry. The distinctive Iranian contribution to this particular charge, Foreign Policy argues, is to wrap it in traditional antisemitism, deflecting domestic anger toward the traditional Great Satan and Lesser Satan.
China's information campaign has proved an interesting exercise in Russian-style confusion: supposition and speculation knowingly amplified through various receptive channels and designed, as CyberScoop reports, more to darken counsel than to achieve any positive persuasion. There's also an interest, the US State Department complains, in downplaying the extent of the pandemic in China itself, and in exaggerating the success and extent of the recovery. But there's a domestic angle in China, too: Beijing is concerned to obscure what the Wall Street Journal called the regime's early "missteps" in controlling the epidemic.
And Russia? As Foreign Policy maintains, the disinformation is mostly for domestic consumption: not much pandemic to see here; move on.
Madness of crowds, meet wisdom of crowds.
A great deal of misinformation is not disinformation, even in its origins. Spontaneously generated baloney, often in the form of conspiracy theories ("COVID-19 was built in a biowar lab!" "The neighbors are spreading coronavirus!" "The National Guard is being federalized to enforce martial law!"), constitutes no small fraction of the pernicious noise filling the Internet. The Internet, of course, has evolved to spread content far and fast, and social media especially are adept at doing so in ways that, for good or ill, seem beyond the power of any but the most primitively restrictive and totalitarian regimes (like the one in Pyongyang) to control. The Washington Post marvels at the ways in which texts, emails, WhatsApp, and TikTok run a mile before proper journalism and government officials even have their skates on. Naked Security says that Facebook is considering putting the brakes on its platforms' adaptability to that sort of use by disabling Facebook Messenger's ability to mass-forward messages, a cap of the kind sketched below.
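What such a cap might amount to is simple enough. Here is a minimal, purely illustrative sketch of a forwarding limit; the cap of five recipients echoes the limit WhatsApp has publicized, and nothing here reflects Messenger's actual internals.

```python
# Purely illustrative: a forwarding cap of the kind reportedly under
# consideration. The limit of 5 echoes WhatsApp's publicized cap;
# Messenger's actual mechanics are not public.

MAX_FORWARD_RECIPIENTS = 5

def forward_message(message: str, recipients: list[str]) -> list[str]:
    """Forward to at most MAX_FORWARD_RECIPIENTS; return who got it."""
    capped = recipients[:MAX_FORWARD_RECIPIENTS]
    # A real system would deliver the message here; we just report
    # which recipients would have received it.
    return capped

sent = forward_message("Martial law starts tonight!",
                       [f"user{i}" for i in range(20)])
print(len(sent))  # 5: the chain is slowed, not stopped
```

The design point is that such a limit doesn't judge content at all; it simply adds friction, slowing a viral chain whether what's being forwarded is panic or public-health advice.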
So there's madness, but also something better. The Telegraph reports that nurses and police officers in particular have taken to Facebook to calm people down and give them a handle on what's actually known about the current pandemic. And Nextdoor, much beloved of neighborhood snoops and scolds, has actually turned much nicer in recent weeks, with neighbors volunteering good offices and offering encouragement. Sure, there's a small admixture of complaining, the Washington Post notes, but on balance people are thinking locally and acting locally, in entirely commendable ways.
The disinformation-control instruments that easily scale tend to be blunt instruments indeed.
The CEO of Digital Content Next, Jason Kint, yesterday posted an open letter to "the 3rd Party Ad Verification Industry"—he's thinking particularly of Google, Oracle, DoubleVerify, and IAS—in which he makes the case that automatically flagging COVID-19 reporting on media sites as "not brand safe" does a disservice to everyone. He asks that the industry exempt "premium, trusted media providers" by default from the brand safety filters that cluster around keywords touching the pandemic. He also urges that the advertising and marketing industry talk with their clients about ways in which responsible coronavirus news might be supported, and that companies and organizations consider allocating some of their advertising budgets to public service announcements about the pandemic.
Digital Content Next, a trade group formerly known as the Online Publishers Association, is in effect calling for a cooperatively developed whitelist. "Ad verification" is the process by which online advertising platforms seek to ensure that the ads they sell appear in the right context, and on sites that won't bring discredit to the advertisers who pay the platforms' freight. It's first cousin to the sort of content moderation platforms and governments use to counter disinformation. The problem is that the easy ways of verifying ads, the ones that don't rely on an expensive crew of human watchstanders, tend to rely on such proxies for content as keywords. A term like "COVID-19" is heavily associated, for now, with various kinds of outright fraud: bogus vaccines, lunatic cures, come-ons for nonexistent benefits. But it's also a term that appears in an overwhelmingly large number of news stories, and so the news outlets (which themselves depend on advertising revenue) are getting stiffed, and suffering, when ad verification steers advertising away from them.
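To see why the keyword proxy fails, consider a deliberately crude sketch of the approach. The blocklist, threshold, and function below are hypothetical, not any vendor's actual logic; the point is that a term's frequency says nothing about whether the page is a scam or a responsible news report.

```python
# Illustrative sketch of keyword-proxy "brand safety" filtering.
# The keyword list and threshold are hypothetical; real ad-verification
# vendors use more elaborate (but conceptually similar) signals.

BLOCKLIST = {"covid-19", "coronavirus", "pandemic", "vaccine"}

def is_brand_safe(page_text: str, threshold: int = 2) -> bool:
    """Call a page unsafe if it mentions blocklisted terms too often."""
    words = page_text.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?\"'") in BLOCKLIST)
    return hits < threshold

scam = "Miracle coronavirus cure! This vaccine beats COVID-19 overnight."
news = "Officials confirmed new COVID-19 cases as the pandemic spread."

print(is_brand_safe(scam))  # False: correctly blocked
print(is_brand_safe(news))  # False: legitimate reporting blocked too
```

Both pages trip the filter for the same reason, which is exactly the blunt-instrument problem Kint is complaining about.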
So what happens when content moderation is turned over to the algorithms? Facebook, Twitter, and YouTube all announced last week that they were sending many of their human fact-checkers home for the duration of the COVID-19 emergency. What happened can easily be imagined (or simply read in WIRED's account). The shift to automated fact-checking and fraud detection resulted in large-scale suppression of much legitimate reporting and commentary, simply because the automated tools were flagging content on the basis of simple rules that necessarily take no notice of intent. A tweet from Facebook's former CISO, Alex Stamos, was particularly sharp: "It looks like an anti-spam rule at FB is going haywire. Facebook sent home content moderators yesterday, who generally can't WFH due to privacy commitments the company has made. We might be seeing the start of the ML going nuts with less human oversight."
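The failure mode Stamos describes is easy to reproduce in miniature. What follows is a hypothetical sketch of a blunt frequency-based anti-spam rule, not Facebook's actual system: it flags any link shared more than a set number of times, which is the signature of a spam run and of an urgently shared public-health page alike.

```python
# Hypothetical sketch of a blunt frequency-based anti-spam rule, not
# Facebook's actual system. The rule sees only how often a link is
# shared, never what it is or why people are sharing it.

from collections import Counter

SPAM_THRESHOLD = 100  # shares per time window; value is illustrative
share_counts: Counter = Counter()

def moderate(post_url: str) -> str:
    """Flag a post once its link has been shared 'too many' times."""
    share_counts[post_url] += 1
    if share_counts[post_url] > SPAM_THRESHOLD:
        return "removed: goes against our standards on spam"
    return "allowed"

# A widely shared public-health page trips the rule exactly as a scam
# link would, because virality is the only signal the rule has.
for _ in range(SPAM_THRESHOLD + 1):
    verdict = moderate("https://www.cdc.gov/coronavirus")
print(verdict)  # removed: goes against our standards on spam
```

Without a human in the loop to notice that the hundred-and-first share is a health authority rather than a huckster, the rule does exactly what it was built to do, and exactly the wrong thing.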
Twitter, reports SC Magazine, has learned from Facebook's experience. At least it will be open to quick restoration of the accounts it mistakenly bashes. But like Facebook's, Twitter's task is a very difficult one. It too is committed to greater automation, because to operate at scale it has to be. And it's also adopting an expansive notion of what constitutes harm. But willingness to reassess and adapt, the platform says, will be its touchstone: "As we’ve said on many occasions, our approach to protecting the public conversation is never static. That’s particularly relevant in these unprecedented times. We intend to review our thinking daily and will ensure we’re sharing updates here on any new clarifications to our rules or major changes to how we’re enforcing them."
"This post goes against our Community Standards against spam," as Facebook's algorithm puts it in the kibosh messages the machines spam out. These hit perfectly valuable, even important, communications about the pandemic. Do they work as well against, say, the vile antisemitism currently emanating from Tehran? Steel against knowledge, gossamer against error.