At a glance.
- Two cautionary tales.
- Reports: Iran was behind "Enemies of the People" website.
- Content moderation at the end of 2020.
Two cautionary tales: CCP influence and jihad claims.
Two stories offer some perspective on alarming news reports.
The first, an essay in Foreign Policy, discusses how Australian news reports took a published list of Chinese Communist Party members as evidence of a new, massive campaign to subvert Australian companies and institutions. The essayist doesn't dispute the threat CCP influence poses, but he does argue that (1) the presence of CCP members in all sorts of companies isn't new, and (2) it's a threat counterintelligence services have long been aware of, and have long been working to contain. In short, a chronic, low-level threat, not the sudden onset of an acute crisis.
The other is a Washington Post op-ed that describes how the New York Times was evidently hoodwinked, beginning in 2014, by bad sources and bad reporting behind its "Caliphate" coverage. It's not the first time news outlets have been taken in by fraud and insufficient fact-checking, and it won't be the last.
Disinformation with malign, kinetic intent.
The Washington Post this week ran an unpleasant story about an information campaign Iran mounted earlier this month. The FBI says Tehran was behind an online effort to incite violence against officials in the US who publicly attested to the integrity of the November elections. A website posted names and pictures of current and former officials, prominently including FBI Director Wray and ex-CISA Director Krebs, along with this introductory explanation: “The following individuals have aided and abetted the fraudulent election against Trump.” Associated social media accounts boosted the narrative under the hashtags "#remembertheirfaces" and "#NoQuarterForTraitors." CyberScoop offers some descriptive detail on the site ("Enemies of the People" was its Ibsenesque name), which also included "photos and purported addresses of state election officials and employees of a voting equipment vendor." Among the pieces of disinformation was a fake letter represented as being from then-CISA Director Krebs to the FBI's cyber division, claiming that Dominion voting machines had been compromised. The Wall Street Journal reports that the website, which was active in early December, has now been taken offline.
The Islamic Republic has been no friend of US President Trump, whose Administration took a harder line on Tehran than had its predecessor, but that disposition was no obstacle to taking advantage of election controversy. The operative principle seems to be, the enemy of my soon-to-be enemy is my friend, even though he's still pretty much my enemy. Reuters has the routine, expected Iranian denial of any involvement in the imposture. “Iran is not involved in inciting violence and creating unrest in the United States,” Alireza Miryousefi, a spokesman for Iran’s UN mission, emailed Reuters. The representative added, as the denial genre prescribes, an injured tu quoque: “Iran itself is the largest victim of cyber attacks, including Stuxnet, and has always emphasized the need for the establishment of a global mechanism to prevent cyber attacks at the United Nations, and at other international institutions.” So there, FBI.
Content moderation: labor-intensive, and without any obvious path forward.
Leave aside for the moment what looks like a growing appetite for censorship among the bien pensant wherever they're found across the political spectrum. If you know the arc of history, and you've read your Marcuse, then, OK, you're unlikely to see the challenge here.
But it's a challenge nonetheless. Social media companies in particular continue to grapple with ways of controlling, or even just marking, the misinformation and disinformation that crosses their platforms. Manual review is labor-intensive and, by many accounts, hard on the reviewers. It's expensive and unreliable at best, fraught with possibilities that run from permissive inattention through honest mistakes and implicit bias to explicit bias. And there's no epistemological engine that will distinguish truth from lies, or even truth from error.
These issues arise along at least two continua, one a continuum of harm, the other a continuum of error. Some error seems as harmless as it is implausible—is there any real evidence that flat-earth cranks are doing anyone (beyond perhaps themselves) any harm? Some is immediately dangerous, like incitement to riot and murder accompanied by lurid propaganda. Social media occupy an increasingly uncomfortable position between publishers and public squares, and, while it's hard to find bright lines in continua, 2021 may see governments in many places seeking to draw such lines. Whether they'll be able to do so in a way that effectively serves public safety without doing violence to civil liberties remains to be seen.
One indirect approach to effectively countering disinformation has been the detection and exposure of coordinated inauthenticity: it's not what they're saying, but rather that the people talking aren't who they say they are.
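To make the idea concrete, here is a toy sketch (in no way any platform's actual detection pipeline, and the account names and thresholds are invented for illustration) of one crude coordination signal: many distinct accounts posting essentially identical text within the same short time window.

```python
from collections import defaultdict
from datetime import datetime

def flag_coordinated(posts, window_minutes=10, min_accounts=3):
    """Flag clusters of distinct accounts posting identical text
    within the same coarse time window.

    posts: list of (account, text, datetime) tuples.
    Returns a list of (normalized_text, window_start) keys that
    exceeded the account threshold.
    """
    buckets = defaultdict(set)
    for account, text, ts in posts:
        # Bucket by normalized text plus a coarse time window.
        window = ts.replace(minute=ts.minute - ts.minute % window_minutes,
                            second=0, microsecond=0)
        buckets[(text.strip().lower(), window)].add(account)
    return [key for key, accounts in buckets.items()
            if len(accounts) >= min_accounts]

# Hypothetical example data: three accounts push the same slogan
# within ten minutes; a fourth posts something unrelated.
posts = [
    ("acct_a", "Remember their faces", datetime(2020, 12, 5, 12, 1)),
    ("acct_b", "Remember their faces", datetime(2020, 12, 5, 12, 4)),
    ("acct_c", "remember their faces", datetime(2020, 12, 5, 12, 8)),
    ("acct_d", "Unrelated post",       datetime(2020, 12, 5, 12, 2)),
]
flagged = flag_coordinated(posts)
```

Real coordinated-inauthenticity detection relies on far richer signals (account creation patterns, shared infrastructure, amplification graphs); the point of the sketch is only that the analysis targets who is talking and how they move together, not what they say.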
For recent responses by Instagram, Twitter, and Facebook to demands that they suppress error, follow the links.