At a glance.
- State-controlled media's contribution to influence operations.
- Amplification of messaging.
- Governments push for faster, more effective content moderation.
- Content moderation in social media.
- Augmenting human decisions about content.
- Misinformation about COVID-19 and 5G.
Examples of state-controlled media outlets' contribution to influence operations.
Among the threats to elections the US Department of Homeland Security and the FBI recently warned against was the likelihood that operators would use "state-controlled media arms to propagate election-themed narratives to target audiences." That's observable in other areas as well, and as usual the lies get their customary bodyguard of truth. Moscow-run RT, for example, is running a story that criticizes former British intelligence officials for using "the revolving door" to make a pile of money fear-mongering about Russian cyberattacks when they would have been better employed paying attention to the developing pandemic.
What's the bodyguard of truth? That retired intelligence officers go to work in the private sector. What's the misdirection? That they should have been watching out for incipient epidemics. What's the lie? That warnings about Russian cyber operations are nothing more than fear-mongering, because Russia's a good citizen of cyberspace. What's the goal? Disruption: incitement of mistrust and resentment.
Other examples may be found in Sputnik, the Kremlin outlet that features interviews with American critics of American society and government. Sputnik is interesting in that it comes across as old-left in its orientation, but updated for 2020, as if the Cold War had never ended and the Berlin Wall never fallen. That curious continuity of messaging would be unlikely to persist if it didn't have a payoff. The narratives are the familiar ones of unmasking: how late-stage capitalism exploits workers, pushes falsehoods about COVID-19, and so forth. Again, Russia has not been an officially communist state for decades, and it's got more than its fair share of billionaires, but old affinities among useful Westerners persist, and Moscow seems not to be shy about making what use of them it can.
Amplification of state messaging.
At the end of last week Twitter took exception to the US State Department's contention that the Chinese government had operated a coordinated disinformation campaign on Twitter. CNN says that Twitter looked at the accounts the State Department's Global Engagement Center forwarded and sees more ambiguity there than State did. Twitter concluded that the accounts "belong to government entities, nongovernmental organizations, and journalists," and that its investigation is continuing.
Bellingcat offers a look at what it characterizes as a Chinese information operation, the #MilesGuo botnet, active on both Facebook and Twitter. Much of the content the network distributes is directed against Guo Wengui, whom Bellingcat describes as "an exiled Chinese businessman residing in the United States," and who is a critic of the Chinese government and, recently, of its handling of the COVID-19 epidemic.
More governments want faster, more extensive content moderation.
A law enacted in France yesterday will give platforms one hour to remove pedophile and terrorist content, on pain of fines of up to 4% of a company's annual global revenue, Reuters reports. The news service specifically mentions Facebook, Twitter, YouTube, Instagram, and Snapchat as examples of the platforms that will be affected. While pedophile and terrorist material must be gone within the hour, the law isn't permissive about everything else: companies will have up to twenty-four hours to take down other "manifestly illegal" content. Justice Minister Nicole Belloubet told parliament the law represents a significant step forward in the Republic's fight against hate speech: “People will think twice before crossing the red line if they know that there is a high likelihood that they will be held to account,” she said. It's stiff punishment, but it's not yet clear how brightly drawn that red line will in fact prove to be.
The UK is also on the warpath with respect to content moderation, although in Westminster the concerns are more about COVID-19 disinformation than about hate speech. ComputerWeekly reports that Minister of State for Digital and Culture Caroline Dinenage on Tuesday told the Lords' Democracy and Digital Technologies Committee that Her Majesty's Government has very clearly told technology companies that more is expected of them. “We do welcome the steps that social media have taken so far," she said, "but the secretary of state has met with a number of the large platforms recently and been very clear that he expects them to go further and faster to address misinformation and disinformation relating to Covid-19." Going further and faster isn't confined to the pandemic emergency, either: “This has lessons for beyond Covid-19 and into the ‘new normal’ world that we may be facing in the months ahead,” she added.
Content moderation in social media.
Twitter has offered more information on its plans to label COVID-19 misinformation as such, Reuters reports. "Some or all of the content shared in this Tweet conflicts with guidance from public health experts regarding COVID-19," the labels will say. A "Learn more" link will take users to some of that relevant expert guidance. In cases where Twitter judges the misinformation to be particularly risky ("depending on the propensity for harm and type of misleading information in the tweet," as Reuters puts it) the social medium will display the warning before the user views the content. Confirmed misinformation will be labeled, as will certain "disputed" claims.
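Reuters' description suggests the policy amounts to a two-axis decision: the type of claim and its propensity for harm. Here is a minimal sketch of that logic in Python; the category names, the mapping, and the action strings are our own illustrative assumptions drawn from the report, not Twitter's published implementation:

```python
from enum import Enum

class Claim(Enum):
    MISLEADING = "misleading"  # confirmed to conflict with expert guidance
    DISPUTED = "disputed"      # contested, but not yet refuted

class Harm(Enum):
    MODERATE = "moderate"
    SEVERE = "severe"

def moderation_action(claim: Claim, harm: Harm) -> str:
    """Map a (claim type, propensity for harm) pair to a display action.

    Illustrative only: the categories and the mapping here are assumptions
    reconstructed from Reuters' description, not Twitter's actual rules.
    """
    if claim is Claim.MISLEADING and harm is Harm.SEVERE:
        return "show warning before the tweet can be viewed"
    if claim is Claim.MISLEADING:
        return "apply label with 'Learn more' link to expert guidance"
    if claim is Claim.DISPUTED and harm is Harm.SEVERE:
        return "apply warning label"
    return "apply label"

# Example: confirmed misinformation with high propensity for harm
# gets the pre-view warning rather than a simple label.
print(moderation_action(Claim.MISLEADING, Harm.SEVERE))
```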
It appears the false or disputed material will remain available, albeit flagged and linked to contrary views, and this is in keeping with the marketplace-of-ideas approach Twitter appears to have adopted. “One of the differences in our approach here is that we’re not waiting for a third party to have made a cast-iron decision one way or another,” Twitter’s public policy director Nick Pickles said. “We’re reflecting the debate, rather than stating the outcome of a deliberation.” This may be both a quicker and more permissive approach than the more dirigiste content moderation being mulled elsewhere.
That more directive content moderation may be seen in the decisions by YouTube, Vimeo, and Facebook to remove a trailer for a full-length film "Plandemic" that pushes an anti-vaccine conspiracy theory about the origins of, and response to, the COVID-19 pandemic. The Washington Post reports that these platforms have decided the trailer (which at twenty-six minutes' running time itself amounts to a short film) pushes misinformation likely to prove dangerous to those who follow its advice. YouTube says that its policy is to take down “content that includes medically unsubstantiated diagnostic advice for covid-19” (like the “Plandemic” trailer). Facebook's rationale was more specific: "Suggesting that wearing a mask can make you sick could lead to imminent harm, so we’re removing the video.” Vimeo said it was “keeping our platform safe from content that spreads harmful and misleading health information. The video in question has been removed by our Trust & Safety team for violating these very policies."
"Plandemic" features fringe scientist Dr. Judy Mitkovits, who the Washington Post says has been associated with discredited research before. Among the film's claims is the assertion that the wealthy have deliberately worked to drive up infection rates in order to increase vaccination rates. Before it was taken down from Facebook at the end of last week, the "Plandemic" trailer had, Digital Trends reports, attracted “1.8 million views, including 17,000 comments and nearly 150,000 shares."
Does it scale?
Content moderation has proven notoriously labor-intensive, and that labor has the reputation of being stressful. The Verge reports that Facebook this week agreed to settle a lawsuit by paying some $52 million to current and former content moderators who suffer from post-traumatic stress disorder (PTSD) as a result of their work. Some 11,250 moderators are covered by the settlement, and those ultimately found eligible will receive at least $1,000, with additional compensation dependent upon their diagnosis.
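A quick back-of-the-envelope check shows how much of the settlement those figures account for. The inputs come from The Verge's report; the split below is our own illustration and ignores legal fees and administration costs:

```python
total_settlement = 52_000_000  # reported settlement, USD
moderators = 11_250            # class members covered
minimum_award = 1_000          # guaranteed floor per eligible moderator

floor_pool = moderators * minimum_award    # $11,250,000 if all are eligible
remainder = total_settlement - floor_pool  # $40,750,000

print(f"Guaranteed minimums: ${floor_pool:,}")
print(f"Remaining for diagnosis-based awards: ${remainder:,}")
```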
The PTSD is said to have been induced by the horrific content the moderators had to review, including graphic depictions of violence. Facebook, which will continue to use human moderators to review content, has taken several steps to reduce the impact of that content, including changing images from color to black-and-white and muting audio by default. Menlo Park will also make counseling and group therapy available to moderators who must work with disturbing content.
But work on automated tools for content moderation continues, since the pressure from various governments to provide such moderation remains high. Facebook is deploying new artificially intelligent systems to "detect COVID-19 misinformation and exploitative content." The AI is not used as a detector or a screen. Rather, it takes the decisions made by human content moderators and applies them to relevantly similar cases as it scans the network. Thus it's a tool for augmenting the results of human decision, not replacing human decision itself.
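To make that division of labor concrete, here is a minimal sketch of the augmentation pattern, assuming a generic text-embedding function. The function names, the toy encoder, and the similarity threshold are our illustrative assumptions, not Facebook's actual pipeline:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a learned text encoder (a real system would use a
    trained embedding model). Here: a toy character-frequency vector."""
    vec = np.zeros(256)
    for ch in text.lower():
        vec[ord(ch) % 256] += 1
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def propagate_decisions(labeled: dict[str, str], unlabeled: list[str],
                        threshold: float = 0.9) -> dict[str, str]:
    """Extend human moderators' decisions to near-duplicate posts.

    `labeled` maps posts a human has already reviewed to a decision
    ("remove", "allow", ...). Any unlabeled post whose embedding is close
    enough to a reviewed post inherits that decision; everything else
    stays queued for human review.
    """
    decisions = {}
    for post in unlabeled:
        v = embed(post)
        best, best_sim = None, threshold
        for reviewed, decision in labeled.items():
            sim = float(np.dot(v, embed(reviewed)))  # cosine: unit-norm vectors
            if sim >= best_sim:
                best, best_sim = decision, sim
        decisions[post] = best or "needs human review"
    return decisions
```

The point of the design is that the model never originates a takedown: it only generalizes a call a human has already made, and anything insufficiently similar stays in the queue for human review.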
The Righteous and Harmonious Fists, 21st century edition, make an appearance Stateside.
The luddites and crazies who've been trashing cell towers in the UK, Belgium, and the Netherlands because they've heard that 5G causes coronavirus have inspired their conspiracy-minded soulmates in the States to take similar action, and all we can do is wonder why it took everybody so long. There have now been incidents reported in the US, and the Washington Post says the US Department of Homeland Security is working on an advisory and a plan to help telcos protect their equipment.
It's probably useful to distinguish disinformation, misinformation, and fraud. Disinformation involves deliberate deception, usually performed by a state actor. Misinformation is error, communicated in the belief that it's actually or at least possibly true. Fraud, like disinformation, is deliberate, but usually perpetrated by a criminal actor. Disinformation and misinformation aim (insofar as misinformation can be said to have an aim) at inducing belief; fraud induces belief too, but always in the service of some other criminal purpose, typically theft. The three can overlap, providing mutual reinforcement and opportunistic advantage. Of the three, misinformation is the least intentional.
The Post mentions disinformation in its coverage of the cell tower damage, but the vandalism seems more likely to have its origin in misinformation. The attacks also provide a discouraging case study of rumor convergence, the strange bedfellows passionate commitment to a cause can make, the reach of influencers, and the sad futility of much rumor control. (China's Boxer Rebellion of 1898-1901 was sparked in part by the spread of fears surrounding the then-novel telegraph networks, and those fears organized themselves into the fighting society of the Righteous and Harmonious Fists, the "Boxers" in international usage.)
One wonders how much the use of "virus" for both a class of pathogen and a kind of malware has contributed to the popular mania. “It is physically impossible that electromagnetic fields transfer particles like viruses,” the Post quotes Eric van Rongen, of the International Commission on Non-Ionizing Radiation Protection. But of course, they coulda maybe transferred them computer viruses yinz read about in the Google, right? Lest that last sentence suggest there's a class or a geographical angle to these new Righteous and Harmonious Fists, it's worth noting that someone could equally well ask whether late-stage capitalism doesn't interrogate a praxis of hermeneutics that unmasks the 5G problematic. Hey, stands to reason; do your own research, sheeple. And so on.
Some of the attacks, sources say, may have been acts of ecotage taking opportunistic advantage of the pandemic to damage counter-to-nature infrastructure. And there's been no shortage of celebrity influencers sharing the dope that 5G causes COVID-19: the light-welterweight boxer and philanthropist Amir Khan, the singer Anne-Marie ("Ciao Adios" and "Rockabye," among other hits), and the actor Woody Harrelson (known for Cheers and Zombieland) have been particularly mentioned in dispatches. (For our part, we're going with Mr. van Rongen over Mr. Harrelson.) And it's dismaying, if not unexpected, to see how far the impulse to do damage like this can be beyond the reach of rumor control. The Federal Emergency Management Agency and others have tried, but with apparently indifferent success. It's as difficult to persuade the Righteous and Harmonious Fists in the Twenty-First Century as it was in the Nineteenth and Twentieth.