At a glance.
- Twitter takes down coordinated inauthenticity.
- TikTok's hybrid approach to community moderation.
- Coalition for Content Provenance and Authenticity seeks to set standards for news.
- Knowing the enemy as a surrogate for knowing.
- Influence operations as seen from the Kremlin.
Twitter takes down coordinated inauthentic accounts.
SecurityWeek reports that Twitter has taken down three sets of coordinated, inauthentic accounts that separately pushed narratives in the service of Iranian, Armenian, and Russian interests. Twitter characterized the takedowns as "disclosing networks of state-linked information operations."
The Iranian influence operation was principally interested in issues surrounding the US Presidential election. Based on tips Twitter began receiving from the US FBI in October, the platform "suspended a total of 238 accounts operating from Iran for various violations of our platform manipulation policies. As previously stated, the accounts had low engagement and did not make an impact on the public conversation. Today, we’re adding these accounts to the archive to empower independent research and analysis."
Thirty-five accounts linked to the government of Armenia were also suspended. Those had a more narrowly regional interest, and pretended to represent political figures and government officials in neighboring Azerbaijan. Some of them also misrepresented themselves as Azerbaijani news agencies. These, too, Twitter took down for violation of its platform manipulation policy. (As a bonus, these bogus accounts also "engaged in spammy activity to gain followers and further amplify this narrative" unfavorable to Armenia's rival, Azerbaijan.)
Finally, Twitter took down two batches of accounts run by Russian operators. In the first, sixty-nine fake accounts were "reliably tied to Russian state actors." This crew had two interests: boosting the Russian government and undermining confidence in NATO. The second takedown addressed thirty-one accounts, drawn from two distinct networks, that were assessed as being run by the Internet Research Agency, a notorious troll farm based in St. Petersburg.
It's worth noting, again, that action against coordinated inauthenticity, that is, against operators claiming to be who they aren't, seems less fraught with civil libertarian problems than content-based attempts at moderation are.
TikTok takes a mixed approach to moderation.
In its Transparency Report rendered Wednesday to cover the second half of 2020, TikTok describes a hybrid approach to community moderation. On the one hand it removed some 340,000 videos on the grounds that they contained "election misinformation," a clear judgment about content. In sum, some 441,000 videos were yanked for pushing misinformation. On the other hand TikTok also took a shot at inauthenticity, at least insofar as it engaged in apparent bot-hunting: it shut down 1,750,000 accounts used "for automation."
It's striking how the platform has grown. The Transparency Report says that "In the second half of 2020 (July 1 - December 31), 89,132,938 videos were removed globally for violating our Community Guidelines or Terms of Service, which is less than 1% of all videos uploaded on TikTok. Of those videos, we identified and removed 92.4% before a user reported them, 83.3% before they received any views, and 93.5% within 24 hours of being posted."
"Provenance and authenticity."
Microsoft describes a more general approach to moderation that seems closer to the exposure of coordinated inauthenticity than it does to direct suppression of content deemed to be false. Redmond has teamed up with the BBC, Adobe, Arm, Intel, and Truepic to organize an industry standards-setting group, the Coalition for Content Provenance and Authenticity (C2PA). Microsoft explains C2PA's goal as follows: "C2PA member organizations will work together to develop content provenance specifications for common asset types and formats to enable publishers, creators and consumers to trace the origin and evolution of a piece of media, including images, videos, audio and documents. These technical specifications will include defining what information is associated with each type of asset, how that information is presented and stored, and how evidence of tampering can be identified."
Knowing your enemy as a surrogate for knowing...
The German Marshall Fund has an account of the Texas ice storms and their attendant power outages as a case study in how a "disinformation supply chain" works. Many outlets, for the most part right- or center-right-leaning, picked up on claims that Texas wind-power generation systems were uniquely vulnerable to cold-weather disruption, and that it was the failure of wind farms that was responsible for the grid's failure. This wasn't the case: there is wind-generated power in Texas, but it represents a relatively small fraction of the state's capacity. It's unclear from the account where the notion originated or why, but whether it was disinformation or misinformation, it seems clear enough that the narrative spread quickly to minds ready to receive it.
A disposition to take political provenance as a touchstone of truth, or at least acceptability, has been on display elsewhere, in this case on the left and center-left. Vice reports, with approval, a movement on behalf of some Congressional Democrats to pressure carriers into deplatforming entire media outlets for treating news stories in what the members consider ill-favored ways. The Dispatch calls it "Fox hunting," and argues that there's no "fake news" exception to the First Amendment (scare quotes in the original). But as the example of Vice suggests, there may not be a critical mass of First Amendment absolutists in the Fourth Estate, which seems, even now, surprising.
Russia discerns hostile influence ops.
Military Times reports that President Putin told a meeting of FSB officers that Russia was the subject of an effort by unnamed foreign powers (but it's not difficult to see that Mr. Putin is looking at you, Uncle Sam, John Bull, Marianne, etc.) to “derail our development, slow it down, create problems alongside our borders, provoke internal instability and undermine the values that unite the Russian society.”
But Russia can chalk up at least one win in the campaign for mindshare: RT reports that Amnesty International has downgraded imprisoned dissident Alexei Navalny from "prisoner of conscience" to mere prisoner because Mr. Navalny said some things in the mid-2000s against Muslim immigrants which Amnesty has investigated and deemed to be "hate speech," and thus not entitled to protection. Specifically, Amnesty says Mr. Navalny "advocated violence and discrimination," and has refused to recant. In Amnesty's view, this form of error at least has no rights.