At a glance.
- Foreign disinformation seems to have been less than fully effective during the US elections.
- Domestic misinformation was noisy.
- Beijing's impersonation campaign targets Chinese Catholics.
- Welcome regulation?
- Dangerous knowledge?
Foreign disinformation around the US elections seems to have had little effect.
An essay published by the Atlantic Council reviews the reasons for foreign disinformation's failure in this US election cycle. Russia, Iran, and China all made attempts, but US countermeasures seem to have been generally effective. Those measures included aggressive Cyber Command intervention in adversary infrastructure and swift, public unmasking of hostile coordinated inauthenticity by the Cybersecurity and Infrastructure Security Agency. Troll farming and impersonation thus seem to have had relatively little effect.
While Russian influence operations during the US elections seem to have fizzled, the Voice of America reports that Moscow appears to be laying the foundations of subsequent campaigns. Instead of troll farming and inauthentic social media accounts, the new Russian approach to disinformation involves establishing mindshare in fringe US media, far left and far right alike, using feeds from state-controlled outlets like RT, Sputnik, TASS, and Izvestia TV. One of the pathologies of intense political commitment, apparently, is heightened gullibility.
But domestic conflict surrounding those US elections may have rendered foreign influence superfluous.
There's been some speculation that one reason influence operations enjoyed relatively little success this election cycle is that intense domestic partisanship produced enough noise to drown them out. Some of that domestic misinformation has alleged foreign interference where government investigators have found none.
Speaking on CBS’s 60 Minutes, former CISA Director Christopher Krebs was particularly concerned to debunk claims of foreign manipulation of US voting systems and vote counting. He said, "We spent something on the order of three and a half years gaming out every possible scenario for how a foreign actor could interfere with an election… countless scenarios…" One theory in circulation holds that software used by Dominion Voting Systems was developed in Venezuela under the direction of the late strongman Hugo Chavez, and that the software is designed to corrupt and manipulate US vote tallies. Krebs says it’s all hooey: votes aren’t being counted offshore, and there’s no evidence in either initial counts or recounts that the US election was stolen by any combination of foreign intelligence services or transnational groups. "So again, there's no evidence that any machine has been manipulated by a foreign power, period."
Preaching to the choir (or appearing as an angel of light).
Researchers at Proofpoint have detected a resurgence of Mustang Panda activity. The Chinese intelligence service threat actor has long been active against ethnic and religious minorities. Its current campaign, which features an upgraded PlugX malware loader written in Golang, is directed against Chinese Catholics. CyberScoop notes that the group is using spoofed email headers purporting to belong to Catholic journalists as part of its phishbait. Mustang Panda’s present efforts represent a resumption of the targeting Recorded Future called out in July.
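The reporting doesn't reproduce the spoofed headers themselves, but the basic mismatch such impersonation exploits is easy to illustrate. Here is a minimal Python sketch, using only the standard library, that flags a message whose From header domain disagrees with its envelope sender; the message, addresses, and domains below are fabricated for demonstration and are not drawn from the actual campaign:

```python
# Illustrative only: flag a mismatch between the From header and the
# envelope sender (Return-Path), one common telltale of header spoofing.
# The sample message is hypothetical, not taken from the Mustang Panda campaign.
from email import message_from_string
from email.utils import parseaddr

RAW_MESSAGE = """\
Return-Path: <bounce@attacker.example>
From: "A. Journalist" <reporter@catholic-news.example>
To: target@diocese.example
Subject: Draft for your review

Please see the attached document.
"""

def sender_domains(raw: str) -> tuple[str, str]:
    """Return (From-header domain, envelope-sender domain), lowercased."""
    msg = message_from_string(raw)
    _, header_addr = parseaddr(msg.get("From", ""))
    _, envelope_addr = parseaddr(msg.get("Return-Path", ""))
    return (header_addr.rpartition("@")[2].lower(),
            envelope_addr.rpartition("@")[2].lower())

if __name__ == "__main__":
    from_domain, envelope_domain = sender_domains(RAW_MESSAGE)
    if from_domain and envelope_domain and from_domain != envelope_domain:
        print(f"Possible spoofing: From domain {from_domain!r} "
              f"does not match envelope domain {envelope_domain!r}")
    else:
        print("From header and envelope sender domains align.")
```

A real mail filter would of course lean on SPF, DKIM, and DMARC results rather than this one heuristic, but the domain mismatch above is the kind of inconsistency those mechanisms are designed to catch.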
The disinformation here is in the service, first, of social engineering a targeted and disfavored domestic group; in the second instance, it's preparation for further influence operations.
Regulation isn't always entirely unwelcome to those being regulated.
Some combination of regulation and tougher industry content moderation is increasingly seen by many as the right direction for the future of online platforms in general and social media in particular.
Hanoi might be providing a picture of how that future may look, once it’s realized. According to Reuters, Vietnam is threatening to block Facebook if the social network doesn’t knuckle under to Hanoi’s demands for censorship of local political content.
A “senior Facebook official” told Reuters, “We made an agreement in April. Facebook has upheld our end of the agreement, and we expected the government of Vietnam to do the same. They have come back to us and sought to get us to increase the volume of content that we’re restricting in Vietnam. We’ve told them no. That request came with some threats about what might happen if we didn’t.”
The government in Hanoi responded to a Reuters follow-up with the simple statement that social networks should not expect to be able to continue “spreading information that violates traditional Vietnamese customs and infringes upon state interests,” which is one way of looking at it.
Dangerous knowledge?
The Johns Hopkins University published, then removed, an article on COVID-19 epidemiology. The removal was due to what the university characterized as the "misuse" of the paper’s conclusions. It’s perhaps worth asking under what conditions knowledge (and there was no suggestion that the results the article reported were false or poorly supported) can be deemed too dangerous to share, and whose call that might be. There would seem to be a continuum here, ranging from, say, detailed engineering instructions for producing nuclear or biological weapons on the too-dangerous-to-share side of the scale, down to gardening advice on the what-possible-harm-could-this-do end. Sure, it’s a continuum, and any reasonable epistemology has to account for human fallibility. But if well-conducted epidemiological research is too risky to share, does it follow that democracy needs guidance from the gatekeepers of information? If so, it’s a democracy Rousseau and Marcuse might recognize, but not too many others.