At a glance.
- Mis- and disinformation about the Beirut explosion.
- Coordinated, cross-platform trolling.
- Content attribution.
- Doxing for election influence in the UK.
- China's efforts to compromise Vatican networks.
- A catphish succumbs to COVID-19?
Shallow fakes and the Beirut tragedy.
Tuesday's explosion in Beirut, about which the BBC has a good summary of what's currently known, was apparently accidental, with negligent handling of cargo probably a major contributing cause. But the disaster immediately spawned both mis- and disinformation. Much of the disinformation was of the foreseeable, politically motivated variety: It was the Israelis (illustrated with bogus pictures of Prime Minister Netanyahu purportedly pointing to the location of the explosion in an aerial photograph some time before the blast). It was the Americans (because they're into everything). It was Hezbullah (with plenty of enemies in the region). The misinformation was driven largely by loose talk, a priori suspicions, and ignorance of the kinetic world. For example, one story widely circulated in social media held that the explosion was a nuclear weapon, because it left a "mushroom cloud," and everyone knows that atomic bombs do that. Well, that they do, but they do so because they're explosions, not because they're atomic: all large explosions around ground level throw up a mushroom cloud.
Most odious, it seems, was the disinformation foisted on the gullible by people with no obvious motive beyond the libido ostentandi, the will to draw attention to themselves. Online trolls ("attention-seeking ghouls" would be better) were quick to jump on the horrific tragedy and human suffering in Beirut, offering doctored videos that seemed to show an attack moments before the blast. One that we saw (and will not link to) was represented as video of a missile strike on the port. It looked like b-roll of a Patriot anti-aircraft missile in its boost phase, motor burning and headed straight for its target on the ground. Obviously bogus, but if you've never seen a missile, you might think, well, who knows? Looks legit. The BBC has an account of how the Beirut disaster will spawn, already has spawned, conspiracy theories. The article closes with good advice: "It's an important reminder that breaking news events are a fertile time for misinformation and speculation online. Think before you share."
Google takes down coordinated trolling on YouTube.
YouTube announced late yesterday that it had banned 2596 accounts emanating from China, some of which had been engaging in "coordinated influence operations," TechCrunch reports. Google's Threat Analysis Group summed up its observations on the second quarter of 2020, and said that Chinese political influence campaigns spiked in May and June. Lower but still significant levels of activity were seen from Russian and Iranian actors. A number of the bad actors flagged from China were engaged in ordinary mercenary spamming, but a significant fraction of the accounts suspended were working for political influence. Google notes that the Chinese campaign was a cross-platform effort not confined to YouTube: it was active in other social media as well.
TechCrunch points out that Google's findings are similar to findings published by Graphika this past April in its report, Return of the (Spamouflage) Dragon. The scale of the Chinese operation is striking, but Beijing isn't the only player. It's easy, moreover, for anyone to spread disinformation, as student of disinformation Nina Jankowicz said in an interview with CBS News.
Digital content attribution against deep fakes.
It's not a comprehensive detection method, but it could develop into a useful adjunct against faked imagery.
The Content Authenticity Initiative (CAI), a group formed by Adobe, the New York Times, and Twitter to develop a standard for digital content attribution, published a white paper (summarized by Axios) laying out its proposed solution to the problem of deepfakes and other doctored online content. CAI's system focuses on verifying the legitimacy of original content rather than detecting content that's been tampered with. The group's proposal involves implementing technology that generates a set of assertions and a digital signature (called a "claim") each time an image or video is created, altered, posted on social media, or has some other action performed on it. This claim (or a link to the claim) is stored in the file's metadata, where it forms a kind of timeline enabling users to see if and how a file has been altered since its creation. CAI says the standard could be integrated into hardware and software products and implemented by social media platforms.
However, the system doesn't prevent someone from deleting the metadata or taking a screenshot or recording of a file, then modifying it and presenting it as an original. As a result, CAI recommends that its proposed solution be used in combination with other methods, such as similarity detection and trusted timestamps, in order to increase a file's context.
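The claim mechanism described above can be illustrated in miniature. The sketch below is not CAI's actual specification: it uses a symmetric HMAC where a real system would use asymmetric signatures from a trusted hardware or software signer, and the `make_claim`/`verify_chain` names are our own. It shows only the core idea, that each claim binds a content hash to the previous claim's signature, so any edit to the history breaks the chain.

```python
import hashlib
import hmac
import json

# Illustrative only: real attribution systems sign with asymmetric keys
# held by trusted capture devices or editing tools, not a shared secret.
SIGNING_KEY = b"demo-signing-key"

def make_claim(action, content, prev_claim=None):
    """Build a signed claim recording an action performed on the content.

    The claim binds the content's hash and the previous claim's signature,
    so the claims in a file's metadata form a tamper-evident timeline.
    """
    assertion = {
        "action": action,
        "content_hash": hashlib.sha256(content).hexdigest(),
        "prev_signature": prev_claim["signature"] if prev_claim else None,
    }
    payload = json.dumps(assertion, sort_keys=True).encode()
    assertion["signature"] = hmac.new(SIGNING_KEY, payload,
                                      hashlib.sha256).hexdigest()
    return assertion

def verify_chain(claims, final_content):
    """Verify every signature, every back-link, and the final content hash."""
    prev_sig = None
    for claim in claims:
        body = {k: v for k, v in claim.items() if k != "signature"}
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        if claim["signature"] != expected or claim["prev_signature"] != prev_sig:
            return False
        prev_sig = claim["signature"]
    return claims[-1]["content_hash"] == hashlib.sha256(final_content).hexdigest()

original = b"raw image bytes"
edited = b"cropped image bytes"

claims = [make_claim("created", original)]
claims.append(make_claim("cropped", edited, claims[-1]))

print(verify_chain(claims, edited))   # True: the timeline is intact
claims[0]["action"] = "forged"        # rewrite history...
print(verify_chain(claims, edited))   # False: the signature no longer matches
```

Note that this sketch shares the weakness the paragraph above describes: if an attacker strips the metadata entirely (or screenshots the image), there is no chain left to verify, which is why CAI pairs attribution with similarity detection and trusted timestamps.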
Documents used during the last UK general election may have come from an email hack.
Reuters reports that papers related to UK-US trade negotiations that were leaked to the Labour Party and others during the last British general election were taken from the email account of former Conservative trade minister Liam Fox. The documents were represented as evidence of plans the Tory government had to “privatise” the National Health Service and turn it over to American for-profit control. This story was far-fetched and implausible even by the standards of electoral politics, and, while the leaked documents were waved by Labour leader Jeremy Corbyn on camera in a campaign photo op, the narrative gained little traction.
The theft has been widely attributed to Russian intelligence services. British foreign minister Dominic Raab last month said “Russian actors” had sought to interfere in the election “through the online amplification of illicitly acquired and leaked Government documents.” An investigation into how the documents were taken is still in progress.
Lest anyone be too quick to put the incident down to another sad case of operator headspace, and the then-trade minister's technological cluelessness, it's worth remembering that spearphishing can be very difficult to detect. The Cambridge Independent got some useful context from its local stable of security experts. A great deal of phishing, the kind that people are on their guard against, is fairly obviously financially motivated. But a nation-state's intelligence services have the time and resources to craft compelling, non-obvious phishbait that would deceive even the elect (or in this case at least the elected).
RedDelta accused of hacking the Vatican.
Recorded Future researchers say a Chinese state-sponsored APT, "RedDelta," infiltrated the networks of the Vatican, the Catholic Diocese of Hong Kong, and several other Catholic organizations ahead of the upcoming renewal of the Vatican's controversial provisional agreement, under which the Chinese government was granted more control over the "underground" Catholic Church within the country.
The attackers used well-crafted spearphishing documents to deliver the PlugX malware to the targeted entities. The campaign displayed significant overlaps with previous operations by the threat actor tracked as "Mustang Panda," but Recorded Future attributes it to RedDelta based on several notably distinct TTPs.
The researchers conclude that "[t]he targeting of entities related to the Catholic church is likely indicative of CCP objectives in consolidating control over the 'underground' Catholic church, 'sinicizing religions' in China, and diminishing the perceived influence of the Vatican within China’s Catholic community." They also add that the campaign "demonstrates that China’s interest in control and surveillance of religious minorities is not confined to those within the 'Five Poisons,' exemplified by the continued persecution and detainment of underground church members and allegations of physical surveillance of official Catholic and Protestant churches."
China's Foreign Ministry denied that it did anything. A representative tweet said, "Regarding reports saying that Chinese state-backed hackers have attacked the Vatican and the Catholic diocese of Hong Kong, China firmly opposes and fights all forms of cyber thefts and attacks. Solid evidence rather than speculation is needed when investigating cyber events." (That's a familiar trope, by the way, in official responses from Moscow as well: show us the evidence, and then we can sit down and figure it out together.)
CTOVision offered a sensible headline on Beijing's denunciation of Recorded Future's study: "When The PRC Ministry of Foreign Affairs Publicly Denounces Your Analysis You Are Probably On To Something" ('If you’re taking flak you must be over the target'). It's not an infallible rule, to be sure, since even truth-tellers can draw flak, but in this case it sounds right.
Idols of the tribe.
A very strange case of catphishing was exposed this week. A moderately popular Twitter account, @sciencing_bi, had for some time (years, apparently) represented itself as a platform for someone who was, in Vice's characterization, "a queer, Indigenous Arizona State University professor." But the professor never existed at all: she was the woman who never was, a catphish for those who saw their concerns expressed in @sciencing_bi's tweets. The catphish had tweeted extensively about her struggles with Arizona State and how the university's forcing her to teach had exposed her to COVID-19 (and inter alia posted a fair bit of denunciation of COVID-19 misinformation). This past Friday it was announced that the professor had “died from COVID-19.” There were expressions of mourning and loss, many of which came from a neuroscientist and former Vanderbilt faculty member, BethAnn McLaughlin, who had been close to the anonymous anthropologist.
Arizona State said they had no one on the faculty who matched the professor's description, and other grounds for skepticism occurred to others over the weekend. Early this week Twitter suspended the @sciencing_bi account. And on Tuesday the New York Times confirmed that the life and death of the professor were indeed an imposture, and that the catphish had been created and managed by Ms McLaughlin. She told the Times (through her lawyer) that, “I take full responsibility for my involvement in creating the @sciencing_bi Twitter account. My actions are inexcusable. I apologize without reservation to all the people I hurt.” Ms McLaughlin says she's getting help for her problem.
"The human understanding when it has once adopted an opinion (either as being the received opinion or as being agreeable to itself) draws all things else to support and agree with it." So wrote Francis Bacon, when he described a common source of error in his Novum Organon. Nowadays it's called "confirmation bias," and it's surprising how far it can carry one. Few if any of us are immune to it.