At a glance.
- Confidence in the US 2020 Census could become an influence ops target.
- Facebook identified and removed coordinated inauthenticity.
- The FBI adds a unit to combat Chinese influence.
- Constructive disinformation is tougher than destructive disinformation.
Census 2020: an attractive target for disinformation.
A US Government Accountability Office assessment warns of shortfalls in the Census Bureau’s preparation for the 2020 US census. While the GAO found the Bureau to be working toward an effective count, its study also found that the Bureau was having difficulty meeting milestones for IT testing and cybersecurity assessment. The GAO would like to see the Census Bureau implement the cybersecurity recommendations it has received from the Department of Homeland Security over the past two years.
Of particular concern is the census’s possible vulnerability to both hacking and disinformation. These concerns have, Federal News Network reports, prompted worry in the US House of Representatives that the census could become a large-scale version of the Iowa Democratic caucus, one with larger and far more enduring consequences.
Some of those concerns are probably overwrought. The Census Bureau told Congress, for example, that the multiple cloud backups it has arranged give it sufficient resilience to recover from an attack affecting the data it collects, and the Bureau has certainly devoted far more time and attention to development, testing, and deployment than the Iowa Democratic Party was able to bring to Shadow’s IowaReporterApp. But mistrust of the census has already been expressed, particularly over concerns that the count could be used to identify and deport undocumented aliens.
Nevertheless, Congress and the GAO seem likely to keep a close eye on the 2020 census. Since mistrust is relatively easy to foment, and since the census plays an important part in allocating both Congressional representation and electoral votes, the decennial count would appear to be a particularly attractive target for Russian influence operators.
Facebook takes down campaigns of coordinated inauthenticity.
Facebook this week removed inauthentic accounts that were functioning in a coordinated fashion. The accounts emanated from Iran, Russia, Myanmar, and Vietnam, and they ran afoul of the social network’s rules against both coordinated inauthenticity and foreign government influence campaigns.
Russian activity focused on the Near Abroad, the former Soviet republics in Russia’s backyard, and especially on Ukraine. Menlo Park took down seventy-eight Facebook accounts, eleven Pages, twenty-nine Groups, and four Instagram accounts that it found in violation of its policy against foreign or government interference. Many of the operators behind these engagements represented themselves as citizen-journalists and sought contact with regular media or public officials, but Facebook said it found signs that all of them were connected with Russian military intelligence services, the same people behind, of course, our old animal friend Fancy Bear.
The campaign from Myanmar and Vietnam included thirteen Facebook accounts and ten Pages. It focused on Myanmar, with activity originating both in that country and in Vietnam.
And finally, Facebook removed six Facebook accounts and five Instagram accounts belonging to a small network, operated from Iran, that focused mostly on the US.
None of these campaigns appears to have had a particularly large following. The Iranian operators, for example, had only about sixty followers, a count that would embarrass even a modestly popular middle-schooler. They do, however, show that social media remain particularly attractive to state intelligence and security services.
The takedown also suggests that Facebook’s concentration on inauthenticity continues to represent a relatively credible approach to controlling influence operations. Fact-checking is much tougher. It’s hard on the checkers, whose boiler rooms expose them to an unremitting stream of disturbing content, and it’s also labor-intensive--no epistemological engine that could automate the process is in prospect--and difficult to scale. Some operations may be better suited to fact-checking than others, however, and Facebook has contracted with the news service Reuters to provide that service to the social network.
The Bureau adds a unit to track and counter Beijing’s influence operations.
Axios reports, in an exclusive, that the US FBI has “quietly” formed a unit that will specialize in fighting Chinese influence operations. The mission is a familiar one, and has much less in common with the counter-propaganda, deterrence, or rumor-control work so often contemplated as a response to Russian disinformation campaigns than it does with traditional counter-espionage. China aims at creating long-term relationships and dependencies, recruiting spies, collaborators, and agents of influence. The FBI has been familiar with that sort of counter-espionage work since the earliest days of the Cold War.
Constructive versus destructive influence operations.
The Iranian campaign Facebook took down this week is interesting in both its tactics and its strategy. Some of the cancelled accounts were flagged by FireEye in its continued tracking of the influence operation the security firm calls “Distinguished Impersonator.” The company began publicly tracking Distinguished Impersonator in May of 2019, when it found Iranian operators assuming fabricated personas that impersonated US Congressional candidates or posed as journalists. The goal of that activity was to solicit interviews intended to, as FireEye puts it, “bolster desired political narratives.” Distinguished Impersonator has remained in business ever since.
Thus Tehran’s influence operations resemble those of Beijing in that they have a positive and constructive goal (positive and constructive, that is, from the operators’ point of view). They seek fundamentally to persuade, not to confuse.
Consider the contrast with Russian influence operations, which are fundamentally disruptive, and disruptive in an opportunistic way. A former inmate of the Internet Research Agency, the notorious St. Petersburg troll farm, told the independent Russian news service Dozhd that their goal was a purely negative one: "Our task was to set Americans against their own government, to provoke unrest and discontent." And the trolls weren’t dull pupils: they became close students of American politics and culture.
Confusion is easier to induce than conviction. It’s as if there were a second law of information analogous to the second law of thermodynamics: confusion increases over time, and persuasion comes only with additional work. This seems especially true when the target inflicts its own wounds: cf. the Iowa Democratic caucus.
Deepfakes, shallowfakes, satire, and argument.
While there are no easy ways of exposing the more sophisticated forms of disinformation, researchers are working on methods that show some promise of at least screening for them. These usually involve, as in some recent promising work at Purdue University, looking for anomalies that betray alterations in video, audio, or images. Purdue’s research concentrates on video, but there’s also interest in developing methods of screening other media.
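By way of illustration only, the sketch below shows the general shape of such anomaly screening: it flags video frames whose change from the preceding frame is a statistical outlier, the kind of discontinuity a crude splice or re-encoded segment can leave behind. This is not the Purdue team’s method; the OpenCV-based approach, the z-score statistic, the threshold, and the file name in the usage comment are all illustrative assumptions.

```python
# Minimal sketch of frame-level anomaly screening (illustrative, not a research method).
import cv2          # OpenCV, for video decoding
import numpy as np

def screen_video(path: str, z_threshold: float = 4.0) -> list[int]:
    """Return indices of frames whose change from the previous frame is anomalous."""
    cap = cv2.VideoCapture(path)
    diffs = []
    prev = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            # Mean absolute difference between consecutive grayscale frames.
            diffs.append(float(np.mean(cv2.absdiff(gray, prev))))
        prev = gray
    cap.release()
    diffs = np.array(diffs)
    if len(diffs) < 2:
        return []
    # Flag frames whose difference is far from the video's own baseline.
    z = (diffs - diffs.mean()) / (diffs.std() + 1e-9)
    return [i + 1 for i in np.where(np.abs(z) > z_threshold)[0]]

# Example usage (hypothetical file name):
# suspicious = screen_video("interview_clip.mp4")
# print("Frames worth a closer look:", suspicious)
```

A screen this simple would, of course, produce many false positives on ordinary scene cuts; the point is only that automated methods look for statistical anomalies and then hand the suspect material to a human for judgment.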
There are, however, no comprehensive, purely technical solutions in prospect, nor is there any algorithm that can reliably arrive at ground truth. As Red Goat Cyber Security’s Lisa Forte told the CyberWire this week, the best counter to disinformation is well-informed human skepticism.