At a glance.
- Election influence from Moscow, Beijing, and Tehran.
- Intelligence collection and occasions for doubt.
- Countering hostile influence operations against US elections.
- Belarus and the technology of Internet disruption.
- Wildfire conspiracies and other misinformation.
- Challenges of content moderation.
Election influence operations: Russian, Chinese, and Iranian.
Microsoft last week described evidence it’s developed indicating extensive Russian, Chinese, and Iranian efforts to penetrate or impede US political campaigns. The target selection is about what one might expect, given the three governments’ general policy objectives. The national campaigns break down roughly as follows:
- Tehran really doesn’t like President Trump, at all. The Iranian group Phosphorus (Microsoft uses elemental names for threat actors; others call this one APT35 or Charming Kitten) is hitting personal accounts of people associated with President Trump’s campaign.
- Beijing, on the other hand, seems interested in former Vice President Biden’s campaign for the Presidency. It also wants to keep a close eye on the US foreign policy establishment, probably because of the extent to which American sanctions against and woofing in the direction of Chinese companies have become a thorn in the Pandas’ paws. The Chinese group Zirconium (APT31 or Hurricane Panda) is most interested in “high-profile individuals associated with the election,” including some having to do with the Biden campaign as well as “prominent leaders in the international affairs community.”
- Moscow is looking for opportunistic trouble. Russia’s Strontium (APT28, the GRU’s very own Fancy Bear) has bipartisan interests, and has gone after more than two hundred targets. Their list runs to campaigns, consultants, political parties, and advocacy groups.
Most of the attacks the groups mounted, Microsoft says, were unsuccessful.
Foreign Affairs has a long essay in its current issue on how Russian influence operations have evolved since 2016. In general, direct troll-farming, while it hasn’t gone away, has fallen from its former prominence. It’s the sort of inauthentic behavior that’s just grown too easy to detect. Instead, the operators have turned to other approaches, from the low-level grift of persuading people to rent out their social media accounts, through the establishment of plausible front organizations, to the hiring of cynics or useful idiots to write for them.
While Russia has been the leader, other governments have shown themselves willing to learn from the best, and state-run online influence campaigns are likely to become, the essayist argues, a permanent feature of future democratic elections. China, Iran, and Venezuela have already shown their ability to adapt some Russian methods to their own purposes. They haven’t been dull pupils, but their positive objectives are inherently more difficult to achieve than the negative, disruptive goals Moscow has tended to pursue.
Collection, influence, and doubt.
An example of how influence can be pushed through various sites when the content chimes with the site's commitments and biases may be seen in the Grayzone, which comments on a Kommersant (Коммерсантъ) report detailing how American voter databases were found on the dark web. It turns out that the databases were, for the most part at least, matters of public record in the states that compiled them. Grayzone suggests that this ought to "call into question the narrative that Russian intelligence 'targeted' US state election-related websites in 2016." Maybe, but that's to construe "targeting" fairly narrowly. It is, of course, possible for intelligence agencies to collect open source intelligence. They do it all the time, and such OSINT is often very valuable.
Consider the ways in which intelligence and marketing data can seem to converge. The Australian Broadcasting Corporation has obtained what appears to be a leaked database showing individuals against whom Chinese intelligence services are developing detailed target profiles. Some twenty-four million people are on a list maintained by Shenzhen-based Zhenhua Data, believed to be a Ministry of State Security contractor.
The Washington Post’s account of the database focuses on collection of social media posts and other open source intelligence on US military, diplomatic, and government personnel. The Post puts the take at some two million individuals, an order of magnitude less than ABC’s tally, but then the Post may be counting only the Americans who were targets. ABC explicitly calls out all Five Eyes--Australia, Canada, New Zealand, the United Kingdom, and the United States--as well as Malaysia, as figuring among the countries targeted.
The database is called the OKIDB, for “Overseas Key Information Database,” and it claims to offer insight into the individuals who figure in it, as well as information about their families. That’s chilling, but that’s espionage. It’s not the first time China has collected against friends and family. One of the less commonly remarked features of the 2013 and 2014 compromises of US Office of Personnel Management data was the extent to which Chinese theft of Standard Forms 86, the questionnaires people seeking US security clearances must complete, also revealed information about family members, friends, colleagues, and neighbors. Thus it’s not too surprising that the OKIDB would exhibit a similar pattern of collection.
The Post observes that the material may be relatively old, and that it’s not entirely clear that it’s being used by the Ministry of State Security, but that in any case Zhenhua Data calls itself “a patriotic company” and numbers Chinese military and government agencies among its customers. Zhenhua Data’s product may be an aspirational one they hope to sell, or it may be in use. But in any case several lessons might reasonably be drawn from the reports.
The Guardian describes how Canberra-based Internet 2.0 was able to extract information from the Zhenhua Data leaks. Zhenhua maintains that there’s nothing particularly sinister about the database: essentially, it’s marketing data. The Australian government’s reaction to the incident has been subdued, but the Labor Party has called upon the Information Commissioner to open an investigation. Reaction from India’s government has been similarly low-key. Since the information was publicly available, the Economic Times reports, the government’s view is that there’s no question of either surveillance or espionage. So perhaps no crime, but that doesn't mean there's no intelligence being collected, either. (It's the fallacy of denying the antecedent, which is bad logic but sometimes effective rhetoric.)
Countering hostile influence operations in US elections.
CSO Magazine has an account of presentations on US election security delivered at last week’s Billington Cybersecurity Summit. The speakers, for the most part either Federal or state officials concerned with the conduct and security of November’s elections, said they’d seen and continue to see a great deal of influence activity, much of it emanating from Russia, but that there had been little to no evidence of direct compromise of the voting apparatus itself--of the devices and systems that record, transmit, and tally the vote. They have also seen little evidence that deepfakes have been put to much use in attempts to influence the election.
As the US elections approach, General Paul Nakasone, NSA Director and commander, US Cyber Command, said that he’s confident those elections will be “safe and secure.” The organizations he leads have made election security a priority. Their approach has had three main areas of emphasis, which, MeriTalk reports, General Nakasone outlined at yesterday’s Intelligence and National Security Summit. He phrased these as questions:
- First, “How do we generate incredible insights on our adversaries?”
- “Secondly, how do we share information and intelligence with the lead for our nation’s elections security which is DHS and also FBI?”
- “And the last piece, how do we impose outcomes on any adversary that attempts to interfere within our democratic processes.”
And, again, he’s confident that they have these areas under control. But influence operations, General Nakasone said, are the “great disruptor,” and they’re here to stay. CyberScoop quotes him as saying, “We’ve seen it now in our democratic processes. I think we’re going to see it in our diplomatic processes, we’re going to see it in warfare. We’re going to see it sowing civil distrust in different countries.”
Belarus and the technology of Internet disruption.
Bloomberg Law reports that US security company Sandvine, citing the abuse of its products by the Belarusian government in shutting down Internet access through most of the country during a disputed election, has said it will no longer do business with Minsk. The Voice of America’s Russian-language service has an account of how the Lukashenka regime made use of Sandvine’s technology.
That technology was developed by Sandvine to block terrorist threats and dissemination of child pornography, but when it was sold to Belarus in May the authorities had other ideas, and turned it against social networks and other online channels through which dissent might be organized.
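Gear of this kind typically classifies traffic by inspecting fields such as the hostname a client requests (for instance, the TLS Server Name Indication) and matching it against a policy list. As an illustration only--the domain names below are hypothetical, and real deep-packet-inspection platforms do far more (protocol heuristics, throttling, per-subscriber policy)--the core matching logic can be sketched like this:

```python
# Minimal sketch of DPI-style domain filtering: match an observed
# hostname (e.g., taken from a TLS ClientHello's SNI field) against a
# blocklist of domains, covering subdomains as well. The blocklist
# entries are hypothetical placeholders, not real services.

BLOCKLIST = {"social.example", "messenger.example"}

def is_blocked(hostname: str) -> bool:
    """Return True if hostname matches a blocked domain or any parent domain."""
    hostname = hostname.lower().rstrip(".")
    labels = hostname.split(".")
    # Check the hostname itself, then each successive parent domain.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return True
    return False
```

In a deployment, logic like this sits inline on carrier links, and matching flows are dropped or reset rather than merely logged.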
Wildfires and the spontaneous generation of ideological misinformation.
The FBI said last Friday that it had investigated reports that Oregon wildfires had been set by “extremists” and determined them to be completely unfounded. Wildfires are endemic on the Pacific Coast, and while this year’s round has been unusually unpleasant, there’s no evidence that the fires have been deliberately set. While scare stories in circulation have imputed the arson that wasn’t to all varieties of extremists left, right, and center, a preponderance of misinformed suspicion has been directed toward Antifa, possibly because of the leftist group’s alleged involvement in incendiarism during some urban rioting. But again--that’s urban, and on a smaller, Molotov cocktail scale than a coastal wildfire would be.
Gizmodo reports that Facebook, where much of the misinformation has landed, began taking measures Friday to stop the spread of this particular rumor, but according to Axios the social network has met with limited success. Most, but not all, of the unfounded wildfire speculation has been on the political right or center. But the Washington Post reports a general increase in memes from the left. These have been for the most part calls for violent action as opposed to dissemination of misinformation. Such fissures, left or right, are ready opportunities for foreign disinformation campaigns.
The general challenge of content moderation.
Content moderation, for all the technological adjuncts a platform may bring to bear, remains a labor-intensive business. BuzzFeed has an account of recollections a fired Facebook data scientist wrote about her work on content moderation while she was employed there. While detecting inauthenticity seems easier than distinguishing truth, lies, ambiguity, metaphor, and error, apparently even inauthenticity can be tough to discern. The data scientist, Sophie Zhang, says she worked for two-and-a-half years on Facebook’s “fake engagement team.” Her large conclusion is cast as a confession: “I have blood on my hands.” “I’ve found multiple blatant attempts by foreign national governments to abuse our platform on vast scales to mislead their own citizenry,” she explained, “and caused international news on multiple occasions. I have personally made decisions that affected national presidents without oversight, and taken action to enforce against so many prominent politicians globally that I’ve lost count.”
The impression her memo leaves is not so much one of callous indifference, still less extensive and self-conscious corruption, but simply one of a system that's overwhelmed with the work it faces.
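Fake-engagement detection of the kind Zhang describes often begins with simple coordination signals: sets of accounts whose activity overlaps far more than organic behavior plausibly would. A toy sketch of one such signal--the account names, post IDs, and similarity threshold below are all illustrative assumptions, not Facebook's actual method--might flag lockstep likers like this:

```python
# Toy sketch of a coordination signal for fake-engagement detection:
# flag pairs of accounts whose sets of liked posts overlap suspiciously
# (high Jaccard similarity). Production systems combine many signals,
# including timing, network, and device features; this is one heuristic.

from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |intersection| / |union| of two sets."""
    return len(a & b) / len(a | b)

def flag_lockstep(likes_by_account: dict, threshold: float = 0.8) -> list:
    """Return account pairs whose liked-post sets overlap >= threshold."""
    flagged = []
    for (acct1, p1), (acct2, p2) in combinations(likes_by_account.items(), 2):
        if p1 and p2 and jaccard(p1, p2) >= threshold:
            flagged.append((acct1, acct2))
    return flagged
```

Even a crude filter like this illustrates the labor problem: every flagged pair still needs a human judgment about whether the coordination is a troll farm or just fans of the same page.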
If you've lost the Kardashians, you've lost the Internet?
President Johnson is supposed to have said something like that about Walter Cronkite and America during the Vietnam War. Kim Kardashian is leading a celebrity freeze on Facebook and Instagram accounts until the platforms do a better job of controlling hate speech and, in particular, incitement to violence. She's concerned as well with the effect irresponsibility in social media is having on American culture and elections. Axios reports that the #StopHateForProfit campaign is likely to be bad publicity for Facebook (and, indeed, Seeking Alpha notes that the social network's share price dropped briefly on the news).