At a glance.
- Sputnik claims collusion between US, UK, France, and ISIS.
- Commercial content moderation.
- Moderation by checklist.
Minor Russian disinformation.
We're in a relatively quiet period of active, state-directed disinformation. (Active state censorship, deception by suppressio veri, is still very much in play, notably in China and Myanmar.)
But one minor Russian effort bears mentioning: Kremlin mouthpiece Sputnik reports that "Western intelligence services and the Daesh Takfiri terrorist group have agreed to launch terrorist attacks in war-ravaged Syria following a series of meetings, according to an informed media source." The claim, then, is that Western governments are conniving with ISIS in Syria to arrange the destruction of Syrian government and Russian forces on the ground, along with armed groups said to be loyal to Iran. Churches, mosques, and other places of worship are also said to figure in the target lists. British, French, and US intelligence services are said to be in cahoots with unnamed "regional countries."
Sputnik sources its report to its own Arabic-language service, which would seem to count as a degenerate form (in the geometrical sense of the word) of amplification. It's like showing someone a second copy of the same newspaper to confirm the facts reported in the first one.
Commercial dis- and misinformation: error, fraud, libel, slander, and ballyhoo.
The BBC reports that Trustpilot, a Danish firm that specializes in offering customers an opportunity to review businesses, thinks it's got a handle on one form of commercial disinformation: bogus reviews, whether positive or negative. The company's transparency report outlines ways in which it culls phony reviews. It uses a mix of automated tools, crowdsourced moderation, and human review. "Reviews can be flagged by both consumers and businesses where they:
- "contain harmful or illegal content;
- "contain personal information;
- "contain advertising or promotional content;
- "are not based on a genuine experience;
- "are about a different business (only businesses can report for this reason)."
The automation is interesting, if only because it seems to confirm that certain forms of labor-saving coordinated inauthenticity are easier to recognize and check than other markers of trouble. "It's very difficult for humans to spot a fake review [unless they are] badly done," Trustpilot's Carolyn Jameson told the BBC. "But the machines look at multiple data points, like the number of times an IP [internet protocol] address has posted a review in quick succession, and patterns in language that might look natural to the human eye but have been repeated too many times in other reviews by the same person."
It's not an epistemological engine, but it seems a useful screen for certain forms of inauthenticity.
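The two signals Jameson describes — bursts of reviews from a single IP, and phrasing repeated across ostensibly independent reviews — lend themselves to a simple sketch. The records, thresholds, and helper names below are illustrative assumptions, not Trustpilot's actual pipeline:

```python
import re
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical review records: (ip, author, text, timestamp). Not real data.
REVIEWS = [
    ("203.0.113.5", "userA", "Great service, fast shipping, five stars", datetime(2021, 3, 1, 10, 0)),
    ("203.0.113.5", "userB", "Great service, fast shipping, five stars!", datetime(2021, 3, 1, 10, 2)),
    ("203.0.113.5", "userC", "Great service fast shipping five stars", datetime(2021, 3, 1, 10, 4)),
    ("198.51.100.7", "userD", "Package arrived late but support was helpful", datetime(2021, 3, 2, 9, 0)),
]

def burst_ips(reviews, window=timedelta(minutes=10), threshold=3):
    """Flag IPs that post `threshold` or more reviews within `window`."""
    by_ip = defaultdict(list)
    for ip, _, _, ts in reviews:
        by_ip[ip].append(ts)
    flagged = set()
    for ip, times in by_ip.items():
        times.sort()
        # Slide over sorted timestamps; any dense run trips the flag.
        for i in range(len(times) - threshold + 1):
            if times[i + threshold - 1] - times[i] <= window:
                flagged.add(ip)
                break
    return flagged

def near_duplicates(reviews, min_jaccard=0.8):
    """Pair up reviews whose token sets overlap heavily (copy-pasted
    phrasing 'repeated too many times in other reviews')."""
    tokens = [set(re.findall(r"[a-z]+", text.lower())) for _, _, text, _ in reviews]
    pairs = []
    for i in range(len(tokens)):
        for j in range(i + 1, len(tokens)):
            jac = len(tokens[i] & tokens[j]) / len(tokens[i] | tokens[j])
            if jac >= min_jaccard:
                pairs.append((i, j))
    return pairs

print(burst_ips(REVIEWS))        # the first IP posted three reviews in four minutes
print(near_duplicates(REVIEWS))  # the first three reviews share nearly identical wording
```

The point of the sketch is the one the article makes: these signals are cheap for a machine and nearly invisible to a human reader looking at one review at a time.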
Engines of propriety and truth recede...like the horizon...
WIRED has an essay on how various platforms seek to keep things civil by using lists of naughty words, and it finds such lists problematic: attractive because they can be applied without thought, judgment, or context (let alone intensionality or the use-mention distinction), they misfire constantly. One person's Scunthorpe is another person's, well, y'know. The essay suggests that a more intersectional approach to censorship would be the answer, instead of concluding that the search for the right list might amount to chasing the rainbow's end.