At a glance.
- Disinformation directed at Qatar.
- Transparency as a hedge against amplification of inauthenticity.
- Disinformation exploiting unrest.
- Beijing's censorship of Zoom.
- An alternative to truth and falsity.
A regional disinformation campaign.
AFP outlines an ongoing disinformation campaign against Qatar. It’s the latest round in a regional dispute that goes back to 2017, when Saudi Arabia, the United Arab Emirates, Bahrain, and Egypt cut ties with Qatar over that country’s alleged closeness to Iran, and thus to Tehran-backed Islamist groups. The recent disinformation includes social media posts claiming that a violent coup d’état was in progress in Doha, complete with grainy video purporting to show machine-gun fire. Some of the material came from newly created social media accounts with no followers and no posting history: none of the corroborative detail that might lend verisimilitude to an otherwise bald and unconvincing narrative.
It’s interesting that AFP calls its story a fact-check. It seems to be straightforward good reporting, but fact-checking now carries a certain cachet among those who struggle with disinformation and fake news. Perhaps that’s fair enough, since it’s meta-reporting, that is, reporting about reporting.
Transparency, not censorship.
One aspect of influence operations has been the interplay among state-run news outlets, troll farms, and useful marks who more or less uncritically accept and amplify the lines the state operators are pushing. Facebook has for some time enjoyed success in identifying and blocking what Menlo Park calls “coordinated inauthentic behavior.” The social network is now beginning to address authentic media whose editorial line may be determined by their government controllers.
Facebook announced some months ago that it would begin labeling accounts run by state-controlled media. This long-anticipated labeling began last Thursday. The labels appear in the Ad Library Page view, on Pages, and in the Page Transparency section.
Facebook is looking specifically for outlets that are “wholly or partially under the editorial control of their government.” Thus Sputnik and RT get the “Russia state-controlled media” label, and China Daily gets the controlled-by-you-know-who equivalent. The Verge explains that Facebook’s determination rests on several factors, “including information about their ownership and funding, the level of transparency around their sources, and the existence of accountability systems like a corrections policy.”
Simply being government-funded, however, doesn’t make an outlet state-controlled. The BBC presumably would get a pass for editorial independence, as would Radio Free Europe/Radio Liberty.
Lies' bodyguard of truth, again.
US Attorney General Barr said this week, in brief remarks about the ongoing civil unrest, that "We are also seeing foreign actors playing all sides to exacerbate the violence." He did not offer specifics. The social media analysis firm Graphika, however, has independently described influence campaigns by Russia, China, and Iran, which seek to further their agendas by, respectively, drawing attention to fissures in American society, discrediting US criticism of human rights violations, and undermining the legitimacy of US-led sanctions.
This particular influence campaign doesn’t seem to be marked, at least not yet, by the troll-farmed inauthenticity that was the distinctive stigma of earlier Russian campaigns.
Suppressio veri.
The Global Times reports that the new national security law for Hong Kong is only about a month away. The measure is widely regarded as marking the end of the one-country, two-systems arrangement that has prevailed since the UK handed Hong Kong over to Chinese rule in 1997. The new, harder line is already being felt: Zoom cancelled the accounts of two US-based critics of the Chinese regime "to comply with local law" (that is, Chinese law), Axios writes. According to the Washington Post, Zoom has come under widespread criticism for acceding to Beijing's required censorship of discussions of Hong Kong's future and of the memory of 1989's Tiananmen Square protests (and their suppression).
Not truth or falsity, but rather benefit and harm?
WIRED continues its coverage of disinformation with an essay that proposes self-moderation during times of unrest, pandemic, and cultural conflict. The touchstone for deciding whether or not to communicate (share, repeat, pass on) a piece of information would be not whether it is true, but whether it is hurtful, with hurt assessed against a scale of marginalization. The essayist argues that this approach has at least three things to commend it: "It’s fast, it’s straightforward, it’s binary." She doesn't say this, but one might add that it's easier to automate content moderation with well-defined criteria of harm and marginalization than it is to automate fact-checking. In any case, the argument would replace truth value with a different value: not "true-false," but the different binary "harmful-beneficial."