At a glance.
- Facebook takes down coordinated inauthentic accounts.
- Recent themes in Russian disinformation.
- Information laundering as a disinformation technique.
- False reports of Russian voter database hacking.
- Countermeasures: potentially viewpoint-neutral content moderation and tools against deepfakes.
Facebook's August takedowns.
The social network identified three networks of coordinated inauthentic accounts.
- "Russia: We removed a small network of 13 Facebook accounts and two Pages linked to individuals associated with past activity by the Russian Internet Research Agency (IRA). This activity focused primarily on the US, UK, Algeria and Egypt, in addition to other English-speaking countries and countries in the Middle East and North Africa. We began this investigation based on information about this network’s off-platform activity from the FBI. Our internal investigation revealed the full scope of this network on Facebook."
- "US: We removed 55 Facebook accounts, 42 Pages and 36 Instagram accounts linked to US-based strategic communications firm CLS Strategies. This network focused primarily on Venezuela and also on Mexico and Bolivia. We found this activity as part of our proactive investigation into suspected coordinated inauthentic behavior in the region."
- "Pakistan: We removed 453 Facebook accounts, 103 Pages, 78 Groups and 107 Instagram accounts operated from Pakistan and focused on Pakistan and India. We found this network as part of our internal investigation into suspected coordinated inauthentic behavior in the region."
The Stanford Internet Observatory characterizes the goal of the Pakistani operation as countering criticism of either Islam or Pakistan’s government. The Russian activity was marked by plenty of QAnon and COVID-19 chatter. Graphika says much of this network’s activity involved redirection to Peace Data, which represents itself as a progressive, independent news service (and which has denounced reports that it's a destination for troll farmers as "slander" pushed by "corporate media"). In some respects the takedown of CLS Strategies' accounts and pages was the most interesting. BuzzFeed reports that CLS Strategies didn’t respond directly to a question about coordinated inauthenticity, beyond briefly stating its corporate mission. The line the accounts took was pro-opposition in Venezuela, pro-regime in Bolivia, and anti-MORENA in Mexico (MORENA is a leftist political party). Facebook did note that CLS as a whole wasn’t banned, since much of the firm’s activity was legitimate. There's been no word on whose behalf the CLS campaigns may have been mounted.
Themes in recent Russian disinformation, as seen in the Facebook takedowns.
Much of the Internet Research Agency's line, as evident in the material Facebook has removed, is striking for its nominally progressive bent. The Telegraph reports that many of the inauthentic accounts established with false personae (and faked pictures) pushed scathing leftist attacks on Tory politicians and policies, but it, like most other media outlets, reads this as targeting of Labour supporters in ways calculated to divide progressives into mutually suspicious factions; on that reading the campaign's concrete tendency is actually pro-Tory. Much the same may be seen in the US, where the output of Peace Data is by no stretch of the imagination Republican, conservative, or even populist, but has the effect of fissuring what might otherwise be a progressive movement that could line up behind Democratic Presidential nominee Biden. Although it's well to the left of the Biden-Harris campaign's public commitments, it does chime nicely with many views in circulation among, for example, the grudgingly loyal supporters of Senator Sanders's unsuccessful campaign. Thus, again, the concrete tendency the New York Times sees is objectively pro-Republican.
Some will no doubt object that any Russian activity whatsoever would be read as pro-Johnson or pro-Trump by a hostile media establishment, but these views aren't unreasonable. If one bets on form that the Russian organs are fundamentally interested in negative disruption as opposed to positive persuasion, then on purely opportunistic grounds such efforts make a good deal of sense. Graphika's report on this round of IRA influence operations observes that the campaign is smaller, more carefully targeted, and quieter than the large-scale efforts deployed in earlier elections.
Add information laundering to amplification as a tool of influence.
The New York Times points out that Russian use of Peace Data and its contract writers to push stories it wishes to get out there is an example of "information laundering." This concept is not new. The Center for Strategic and International Studies formally described its Russian modality under that name in 2017, but the practice goes back much farther than that.
According to the Times, the Russians succeeded in getting actual Americans to write for Peace Data, which would account for the relatively good idiomatic control on display in its posts. The Times says the Internet Research Agency posted offers for freelance writers on a job board, and that it spoke to one such freelancer who was steered to Peace Data that way. The writer asked to remain anonymous because he didn’t want his professional reputation damaged by his having been duped by the Russian government. He was paid seventy-five dollars a post, which, relatively speaking, is chickenfeed in the freelance market.
So in this case the Russians appear to have made use of the usefully gullible. The content on Peace Data’s site, which the Times believes was designed to harm the candidacy of Democratic nominee Joe Biden by fomenting dispute within what might otherwise be a more disciplined left, contains complaints that the Democrats are insufficiently progressive on various issues and denunciations of alleged Republican closeness to unsavory far-right elements. When President Trump appears on Peace Data’s pages, it’s complete with horns, hooves, and tail (metaphorically speaking), so if the Times is right, it’s a relatively sophisticated propaganda gambit.
Voter database compromise claims come to nothing.
Sometimes, it's worth noting, you can achieve an effect by reputation alone, hardly having to lift a finger. A good example is the chatter about Russian compromise of US voter databases that blew through Twitter at the beginning of the week. It has come to nothing (CISA and the FBI haven’t seen anything of the kind during this election cycle), but it upset many for a couple of days and exacted a toll, at the least, in public-affairs and rumor-control effort.
On Tuesday the Russian-language newspaper Kommersant aroused a Twitter flurry with a report that “data” on 7.6 million Michigan voters, as well as on millions of voters in other states (Connecticut, Arkansas, Florida, and North Carolina), had appeared on Russian dark web sites. The data were said to include name, date of birth, gender, date of registration, address, postal code, email address, voter identification number, and polling station number.
But, as Dmitri Alperovitch tweeted in an update, there’s probably a lot less here than meets the eye, since in many states all of that information is considered a matter of public record, and can be supplied in response to ordinary information requests.
One aspect of Kommersant’s story is interesting. It says that the dark web hoods with the data on their hands were thinking of turning the information in to the US State Department in exchange for a payout under the Rewards for Justice Program. We doubt that would work, but give the hoods credit for thinking outside of the box.
Facebook has successfully hunted coordinated inauthenticity and less successfully (but more controversially) attempted content moderation. A recent alteration to the platform's terms of service, noticed by CNET, puts users on notice that Facebook reserves the right to remove content that would expose it to regulatory risk, which at least pushes the decision to remove material back onto law or government regulation. The Wall Street Journal reports that Facebook intends to stop taking new political advertising during the week before this November's election, and that it intends to flag any candidate's premature claims of victory. Protocol says that Facebook is also conducting research into how it may have influenced the election, but, of course, that research won't be complete until after the election is over.
As deepfake technology improves, proliferates, and becomes increasingly commodified, MIT Technology Review points with alarm to the ways in which memers (people who try to come up with Internet memes likely to be passed on, to go viral, as they say) are using deepfakes in disturbingly engaging ways. Want to see Celebrity X say something in a conversation with Celebrity Y, even though X and Y have never said those things, or even met? Don't worry, the Internet will provide. Microsoft has released its Video Authenticator, which Redmond hopes will be able to expose and thereby contain the use of deepfakes for political influence and disinformation. The system was developed using the public FaceForensics++ dataset and was subsequently tested against the DeepFake Detection Challenge Dataset. It seeks to identify subtle clues that an image has been altered. It will be interesting to see how well it performs, and how well it avoids both false positives and false negatives.
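Video Authenticator's internals aren't public, but one classic and much simpler forensic heuristic for surfacing "subtle clues" of alteration is error level analysis (ELA): recompress an image at a known JPEG quality and examine where the result differs from the original, since regions pasted in or synthesized after the last save tend to show error levels that differ from their surroundings. A minimal sketch in Python, assuming the third-party Pillow imaging library (an illustration of the general idea, not Microsoft's method):

```python
import io
from PIL import Image, ImageChops

def error_level_analysis(img: Image.Image, quality: int = 90) -> Image.Image:
    """Return the per-pixel difference between an image and a JPEG
    recompression of itself. Uneven error levels across the frame can
    hint that parts of the image were edited after its last save."""
    original = img.convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    recompressed = Image.open(buf)
    return ImageChops.difference(original, recompressed)

# Toy demo: a synthetic flat image recompresses almost losslessly,
# so its error levels stay near zero on every channel.
flat = Image.new("RGB", (64, 64), (128, 128, 128))
ela = error_level_analysis(flat)
max_error = max(hi for _, hi in ela.getextrema())
print("max per-channel error level:", max_error)
```

On a real photograph the interesting signal is spatial rather than a single number: an edited region stands out as a patch whose error levels differ from the rest of the frame, which is why forensic tools typically render the ELA image for visual inspection.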