At a glance.
- COVID-19 disinformation campaigns: Russian, Chinese, and Iranian.
- US Senate Select Committee on Intelligence reports on the Intelligence Community's assessment of Russian active measures during the 2016 US elections.
- Twitter will block tweets that distribute harmful misinformation about the pandemic.
- Arrests in India of people accused of spreading COVID-19 misinformation.
- Contact tracing for COVID-19 misinformation.
- Pandemic disinformation, astroturfing, and political advocacy.
- Google blocks malicious coronavirus-themed emails.
- Anarchist influence operations surface during the pandemic.
US State Department report describes converging COVID-19 disinformation campaigns.
POLITICO has reviewed a report by the State Department's Global Engagement Center that concludes three governments—those of Russia, China, and Iran—are pushing complementary lines of disinformation:
- COVID-19 is an American bioweapon.
- The US is making political capital from the pandemic.
- The virus did not originate in China.
- US Army troops spread the virus.
- US sanctions are killing Iranians during the pandemic.
- China responded to the crisis effectively and responsibly, but the US response was marked by negligence.
- Russia, Iran, and China are handling the pandemic well.
- The US economy cannot withstand the toll COVID-19 is exacting.
The false stories are being distributed by a mix of official, semi-official, and cooperating outlets. Some of the official outlets aren't shy about disseminating surprisingly tabloidesque stories: the Russian military paper Zvezda, for example, began retailing in March the story that the novel strain of coronavirus was developed by the Bill and Melinda Gates Foundation, an unspecified secret laboratory, and a cabal of pharmaceutical companies. Their goal was evidently profit. (This particular accusation, facially preposterous, was nonetheless picked up by "unknown activists," the Washington Post reports, and distributed through 4chan.)
Zvezda added a further dimension to the Gates Foundation conspiracy story with the manifestly false claim that the virus is known to be racially targeted. POLITICO quotes Zvezda: "It is noteworthy that the famous pharmaceutical giants and the Pentagon leadership participated in this theater of cruel cynicism. The fact is that while the disease affects only the representatives of the Mongoloid race, such suspicious selectivity raises questions from experts."
The lines of disinformation have both domestic and international audiences, and it seems likely that the convergence is an opportunistic matter: Iran, China, and Russia share a common adversary, the United States, and it's useful to deflect any blame for the crisis in that direction. The report describes the activity as a convergence, not necessarily a coordination, and that was partially confirmed by a comment a representative of the Global Engagement Center offered the Wall Street Journal. Lea Gabrielle, the GEC’s special envoy, told the Journal that much of the cooperation did seem to be opportunistic, but she added that there was also some evidence of coordinated action among the three governments. “Russia, China and Iran do have media cooperation agreements and I think this is important because disinformation narratives are known to originate from official state news sources,” she said.
The Chinese and Russian embassies in Washington didn't respond to the Journal's request for comment, but Iran's mission to the United Nations in New York emailed the paper as follows: “For sure, any disinformation or propaganda on the coronavirus pandemic is emanating from the U.S. administration, not Iran. U.S. media [is full of] stories of lies and disinformation spread by the administration.”
Foreign Policy notes a new assertiveness on the part of Chinese embassies around the world as they push disinformation as part of their public diplomacy. The larger cause is the increasingly hostile international scrutiny Beijing has received over its handling of the pandemic. The proximate cause, however, seems to be a familiar one: the diplomatic staff want to please the home office for careerist reasons.
US Senate Select Committee on Intelligence releases volume 4 of its report on the Intelligence Community's assessment of Russian influence operations.
The Select Committee's report, volume four of a projected five, is heavily redacted to protect intelligence sources and methods and runs to 158 pages. The full title, "Report of the Select Committee on Intelligence, United States Senate, on Russian Active Measures Campaigns and Interference in the 2016 U.S. Election, Volume 4: Review of the Intelligence Community Assessment with Additional Views," gives a fair picture of the scope of the report. It's the latest in the Committee's series of reports on Russian interference in the 2016 election; this volume reviews the Intelligence Community Assessment ("ICA," as it's called throughout the document). The Committee's review of the Intelligence Community's work is generally favorable, and the report was passed out of committee with unanimous bipartisan support.
The Committee set out to answer four questions:
- Did the ICA meet the tasking it received from President Obama on December 6th, 2016? The answer is a qualified yes, with some reservations about whether the ICA addressed the historical context of Russian active measures in the 2008 and 2012 US elections.
- Did the intelligence presented support the analysis? Yes.
- "Was the analytic tradecraft sound?" Yes.
- "Does the Committee accept the analytic line?" Yes.
"The Committee found that the ICA provides a proper representation of the intelligence collected by CIA, NSA, and FBI on Russian interference in 2016, and this body of evidence supports the substance and judgments of the ICA." The controversial Steele dossier appeared to the Committee to have played little if any role in formulation of the ICA. "Director Corney addressed the question of the dossier and its placement when asked by SSCI Chairman Burr whether he 'insisted that the dossier be part of the ICA in any way, shape, or form?' Director Corney replied: I insisted that we bring it to the party, and I was agnostic as to whether it was footnoted in the document itself, put as an annex. I have some recollection of talking to John Brennan maybe at some point saying: I don't really care, but I think it is relevant and so ought to be part of the consideration.... The Committee found that the information provided by Christopher Steele to FBI was not used in the body of the ICA or to support any of its analytic judgments. However, a summary of this material was included in Annex A as a compromise to FBI's insistence that the information was responsive to the presidential tasking."
The report is, as mentioned above, very heavily redacted. Most of the unredacted material consists of statements of the Committee's conclusions, and it's sometimes difficult to assess the grounds on which they were reached. That's probably unavoidable, given the reasonable need to protect intelligence sources and methods.
What can you do with misinformation? Block the tweets that spread it?
TechCrunch reports that Twitter intends to begin addressing one specific class of misinformation: tweets that connect COVID-19 to 5G technology. "We have broadened our guidance on unverified claims that incite people to engage in harmful activity, could lead to the destruction or damage of critical 5G infrastructure, or could lead to widespread panic, social unrest, or large-scale disorder," the company said in a tweet of its own.
What do you do with people who pass on misinformation? Lock them up?
In India, that's one approach, Foreign Policy reports. The arrests are for distributing misinformation about the COVID-19 pandemic, especially for transmitting erroneous information about protective responses to the virus, numbers of cases, and so on. At least one state, Maharashtra, has passed a law specifically directed against COVID-19 misinformation, but most cases are being brought under existing laws, some of them public health codes dating back to the Nineteenth Century.
Contact tracing for COVID-19 misinformation.
Facebook last Thursday announced its intention to introduce a kind of contact tracing for misinformation. It will be coupled with a kind of online rumor control Facebook is calling "Get the Facts" and with the introduction of some straight dope about the virus into the news feeds of users who've interacted with dubious content. It will work like this:
"We’re going to start showing messages in News Feed to people who have liked, reacted or commented on harmful misinformation about COVID-19 that we have since removed. These messages will connect people to COVID-19 myths debunked by the WHO including ones we’ve removed from our platform for leading to imminent physical harm. We want to connect people who may have interacted with harmful misinformation about the virus with the truth from authoritative sources in case they see or hear these claims again off of Facebook. People will start seeing these messages in the coming weeks."
The system depends upon Facebook's large troupe of fact checkers, and it's unavoidably a time-consuming process to execute at scale. A study by the content-moderation-friendly advocacy group Avaaz generally had good things to say about Facebook's work against misinformation, but found that it took about twenty-two days, on average, for a correction to catch up with suspect reporting.
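To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python of the "contact tracing" idea described above: match an interaction log against posts that were removed as harmful misinformation, then queue a corrective News Feed message pointing affected users to the WHO's myth-busting page. The data structures, field names, and message text are assumptions made for illustration; Facebook has not published its implementation.

```python
# Hypothetical sketch only; not Facebook's actual system.
# Idea: users who liked, reacted to, or commented on since-removed
# misinformation get a corrective message in their News Feed.

from dataclasses import dataclass
from typing import Dict, Iterable, List, Set

WHO_MYTHBUSTERS_URL = (
    "https://www.who.int/emergencies/diseases/novel-coronavirus-2019/"
    "advice-for-public/myth-busters"
)


@dataclass(frozen=True)
class Interaction:
    user_id: str
    post_id: str
    kind: str  # "like", "reaction", or "comment"


def users_to_notify(removed_post_ids: Set[str],
                    interactions: Iterable[Interaction]) -> Set[str]:
    """Return users who interacted with posts later removed as misinformation."""
    return {
        i.user_id
        for i in interactions
        if i.post_id in removed_post_ids
        and i.kind in {"like", "reaction", "comment"}
    }


def queue_corrections(user_ids: Set[str]) -> List[Dict[str, str]]:
    """Build corrective News Feed messages linking to authoritative sources."""
    return [
        {
            "user_id": uid,
            "message": ("You interacted with a removed post containing harmful "
                        "misinformation about COVID-19. See myths debunked by the WHO."),
            "link": WHO_MYTHBUSTERS_URL,
        }
        for uid in user_ids
    ]


if __name__ == "__main__":
    removed = {"post-123"}  # posts taken down as harmful misinformation
    log = [
        Interaction("alice", "post-123", "like"),
        Interaction("bob", "post-456", "comment"),
    ]
    for msg in queue_corrections(users_to_notify(removed, log)):
        print(msg)
```

The point of the sketch is simply that the "tracing" is retrospective: corrections follow users who have already interacted with content that was only later identified and removed, which is one reason the lag Avaaz measured matters.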
Pandemic disinformation, astroturfing, and political advocacy.
There's been a surge in the registration of domains related to a movement to reopen normal activity in the United States, KrebsOnSecurity reports. Some of this is normal political organization and activity (customary or not, it's viewed with suspicion in a Washington Post article), but a great deal of it appears to be astroturf, either politically motivated or mounted as a ploy for donations. There's also the possibility that some (not all) of the activity can be ascribed to foreign actors. We'll have more on this trend in tomorrow's coverage.
Facebook and Instagram have become sensitive to the effects that posts with large audiences can have, and will begin displaying more information about where, geographically, the accounts involved are located. Menlo Park blogged yesterday, "[W]e’re going a step further to provide the location of high-reach Facebook Pages and Instagram accounts on every post they share, so people have more information to help them gauge the reliability and authenticity of the content they see in their feeds. We’re piloting this feature in the US, starting specifically with Facebook Pages and Instagram accounts that are based outside the US but reach large audiences based primarily in the US." TechCrunch observes that Facebook hasn't specified exactly what it considers "large" or "high-reach" to be.
Google blocks malicious coronavirus-themed emails.
VentureBeat reports that Google is blocking some eighteen million malicious coronavirus-themed emails daily. The company explained in its Google Cloud blog the measures it's put in place to help secure Gmail users during the current pandemic. The company's Advanced Protection Program has been adjusted to the new style of threat (ZDNet has some comments on this), and G Suite's phishing and malware controls are enabled by default.
Not everyone is particularly happy with these measures. Colin Bastable, CEO of the security awareness training company Lucy Security, thinks Google's response is likely to be less than fully successful, in part because Google itself is conflicted:
“On the other hand, hackers use gmail accounts with spoof names in BEC fraud, and to associate gmail accounts with phishing links, in phishing campaigns. Google gets to virtue-signal while playing both side of the fence. Google are also using the “https:” certificate requirement as part of their browser war with Apple and Microsoft, kidding people into thinking encrypted browser sessions keep people secure when using Chrome. Over 80% of phishing sites use certificates. People must always ask themselves what is in it for Google. Relying on email filters, crypto and firewalls to protect remote workers from opening the door to cybercrime is naïve. Hackers only have to get lucky once and they are winning hands down. Patching people is the only way that we are going to win the war on cybercrime.”
In fairness to Google, Mountain View's own explanations of how to combat phishing emphasize training and education as much as they do technical filtering. It's unreasonable to expect technical filtering, no matter how advanced, to cope fully with social engineering. That threat plays on people's beliefs and desires, and those are inevitably intensional. The threat actors are after figurative hearts and literal minds, after all. Email is just the avenue of approach.
Anarchist influence operations surface during the pandemic.
Ghost Squad Hackers (GSH), an offshoot of Anonymous that, like its parent syndicate, has had a relatively low profile in recent months, has resurfaced with campaigns designed to erode trust in the governments of Australia, India, Pakistan, Thailand, and Zimbabwe. Researchers at the security firm Vigilante have been tracking the group's recent activities, and they quote its "de facto leader" (who goes by the nom-de-hack "s1ege") on the Ghost Squad's objectives. “Our intentions are to save innocent lives…to help provide justice where governments fall short or to give justice to governments when the people can’t. …But there are no lines GSH will not cross…we don’t care who the target is.” Speaking more personally, s1ege writes, “What motivates me is seeing monolithic systems fall, and the freedom of information and ensuring justice where most governments agencies fail to serve.”
It's propaganda of the deed, aimed at convincing people that governments can't take care of themselves, still less of their citizens. Dark Reading quotes Adam Darrah, Vigilante's director of intelligence: "We think the hacks are probably attempts to undermine public confidence in government at a time of universal unease due to the COVID-19 pandemic." He thinks other hacktivist campaigns will probably follow those Ghost Squad has already mounted. "The United States is a highly desirable target, and it would make sense that hacktivists would pour salt on the wounds in a country like Italy, which has had such a hard time."