At a glance.
- Nine Network knocked offline before airing a report on Novichok.
- German politicians' email accounts hacked by Ghostwriter.
- More reviews of US Cyber Command's bear meme.
- Twitter impersonations surround a vote on unionizing Amazon's workforce.
- Friction to impede the spread of mis- and disinformation.
- Artificial intelligence and the algorithms that drive it.
It's April Fool's Day. As usual we're going to sit this sort-of holiday out, but we'll issue a ritualistic warning for the day: stay skeptical. (Caveat lector, atque auditor...) And if you're tired of April Fool's Day, you'll find kindred spirits over at Gizmodo.
Suppressio veri in Australia.
Channel 9 Australia sustained a cyberattack over the past weekend that knocked much of its programming off the air. The Sydney Morning Herald describes the attack as "some kind of ransomware likely created by a state-based actor," with speculation suggesting either China or Russia as the country of origin. That is, the attack looks like ransomware, but it may be a simple destructive attack, like NotPetya, especially since no ransom demand has been received.
Sino-Australian relations have grown frostier over the past year, but Russia has the more proximate motive to hit Nine: Moscow may not care for some of the outlet's reporting.
In any case, TVBlackBox is calling the attack for Moscow, and says it appears to have been an attempt to disrupt broadcast of a Nine investigative report on Russia's use of Novichok nerve agent against dissidents, spies, and other undesirables. (Novichok also killed at least one entirely uninvolved person in the UK as sad collateral damage in an unusually reckless and ruthless GRU operation.)
Nine seems to think it was the Russians, too, or at least some of its on-air talent does. When Nine got back on the air this morning, albeit in somewhat degraded form (using hand-drawn graphics, for example, their regular computers being unavailable, and with some brief dead air), weekday host Karl Stefanovic asked for the audience's understanding and indulgence: “Bear with us as we try and work around these technical issues caused by Vladimir … We’re not blaming anybody in particular.”
The Australian Cyber Security Centre is helping Nine. The Australian Financial Review quotes the agency as saying, “The ACSC is aware of a cyber incident impacting the Nine Network and has offered technical assistance.”
Suggestio falsi in Germany.
In what may have been a preliminary move in a disinformation campaign, several members of Germany's Bundestag have had their personal email accounts breached, CyberScoop says. The BfV and BSI security services have briefed the federal legislative body and contacted affected members. German officials have provided few details, but Tagesschau reports that the compromise was the work of Ghostwriter (a threat actor associated with Russian interests), that spearphishing was the attack vector, and that Russia’s GRU was probably responsible.
Der Spiegel is calling it a Russian operation, and also specifically attributing it to the GRU, the Russian military intelligence agency. Seven members of the Bundestag were affected, as were thirty-one members of Land parliaments, that is, parliaments belonging to the Federal Republic’s constituent states, roughly the equivalent of US state legislatures. “Several dozen” other political figures were also affected. Most of the targets were members of the two largest German political parties, the center-right CDU/CSU and the center-left SPD.
Security firm FireEye's 2020 account of Ghostwriter described it as a disinformation peddler. “The operations have primarily targeted audiences in Lithuania, Latvia, and Poland with narratives critical of the North Atlantic Treaty Organization’s (NATO) presence in Eastern Europe,” the company’s report said, “occasionally leveraging other themes such as anti-U.S. and COVID-19-related narratives as part of this broader anti-NATO agenda.” FireEye didn’t go so far as to identify the group as a unit of the Russian government, but objectively, as people say, Ghostwriter acted in the Russian interest.
German security services have warned that follow-on operations should be expected.
"Greetings, fellow youths..." (or fill in your own words to that effect).
"Trying too hard to engage online with a younger crowd and missing the mark," is Military.com's assessment of the US Defense Department's "Silly Bear" meme campaign, which chimes with what RT wrote about it last week.
Last week Reuters reported that an official in Russia's Defense Ministry complained that the US was waging a propaganda campaign designed to undermine both the institution of Russia's presidency and President Putin personally. The Silly Bear campaign is only a small part of what Russia sees as a US influence offensive, which aims at destabilizing the country’s “civilizational pillars."
“A new type of warfare... is starting to appear. I call it, for the sake of argument, mental war. It’s when the aim of this warfare is the destruction of the enemy’s understanding of civilizational pillars,” Andrei Ilnitsky, who advises the Defense Minister, said in a television interview. His account of the US target list is interesting: the United States was said to be using economic and “informational” measures in attempts to undermine Putin, the presidency, the army, the Russian Orthodox Church, and Russian youth. Subsequently asked for comment, a government spokesman, Dmitry Peskov, concurred. He said, “A deliberate policy to contain and keep Russia down is being pursued. It is absolutely constant and visible to the naked eye.”
So, in this case, Silly Bear would appear to be saying, greetings, fellow (Russian) youths.
Not union busting (but maybe it could have been).
Technology Review has an account of what it characterizes as "deepfake" bogus Amazon workers tweeting about the union vote being taken at the company. "Deepfake" seems excessive, especially since these seem to be fairly ordinary social media impersonations, commonplace catfishing and trolling, probably with parodic intent. This Twitter thread offers a summary of what's being posted. "The profiles used deepfake photos as profile pictures and were tweeting some pretty laughable, over-the-top defenses of Amazon’s working practices. They didn’t seem real, but they still led to confusion among the public. Was Amazon really behind them? Was this some terrible new anti-union social media strategy? The answer is almost certainly not—but the use of deepfakes in this context points to a more concerning trend overall."
Metaphorical friction as an obstacle to the spread of misinformation and disinformation.
WIRED published an essay in which it advocates "old-fashioned friction" as a desirable impediment to the spread of lies and delusions. After noting that the spread of false rumor is as old as the spread of news, the article discusses what's new about today's situation: "Regardless of the era, rumors and falsehoods spread via two basic steps: discovery, then amplification of unverified knowledge. What’s different now is that today’s communication platforms have fundamentally transformed the way information flows, propelling viral rumors exponentially faster and farther than ever. Widespread belief in certain types of viral rumors poses a threat to institutions that we rely on, including democracy itself."
A partial answer, at least, to the rapid dissemination and amplification of falsehood, the essay argues, is friction. Traditional journalism, as it slowly evolved into more reliable and trustworthy, if imperfect, forms, introduced friction in the shape of fact-checking and editorial review. The WIRED essay argues that the relatively frictionless web should seek to introduce some such mechanisms to mediate between discovery and dissemination. "Social posts are not news articles, even if they’ve come to resemble them in our news feeds," the essay concludes. "Verifying new information is a core part of any functioning democracy, and we need to recreate the friction that was previously provided by the journalistic process."
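For readers who think in code, the friction-between-discovery-and-amplification idea can be sketched as a toy mechanism. Everything here is invented for illustration (the class name, the review function, the queue design are ours, not WIRED's or any platform's): shares of unverified posts are held in a review queue instead of propagating instantly, while vetted items pass through without delay.

```python
import time

class FrictionGate:
    """Toy model of editorial friction: shares of unverified posts
    are held for review instead of propagating instantly."""

    def __init__(self):
        self.verified = set()   # post ids that have passed review
        self.queue = []         # (post_id, requested_at) awaiting review
        self.shared = []        # posts actually amplified

    def mark_verified(self, post_id):
        self.verified.add(post_id)

    def share(self, post_id):
        if post_id in self.verified:
            self.shared.append(post_id)        # frictionless path for vetted items
            return "shared"
        self.queue.append((post_id, time.time()))  # friction for everything else
        return "queued for review"

    def review_queue(self, fact_checker):
        """Run a review function over the queue; amplify only what passes."""
        still_waiting = []
        for post_id, ts in self.queue:
            if fact_checker(post_id):
                self.verified.add(post_id)
                self.shared.append(post_id)
            else:
                still_waiting.append((post_id, ts))
        self.queue = still_waiting
```

The design choice the essay is really arguing for sits in `share`: the default path is the slow one, and speed is something a post earns by surviving review, not something it gets for free.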
Some of the responsibility for verification lies with the platforms, but users necessarily play a vital critical role, too. NPR has an account of a cooperative effort underway in Florida schools to inculcate a critical spirit in students by teaching them "digital literacy."
Artificial intelligence and the algorithms that drive it.
One reason mediation and verification have proven difficult to extend to social media is that they're labor intensive and therefore expensive. The tech sector continues to look for ways of automating content moderation and fact-checking, but with limited success. VentureBeat summarizes Facebook's anti-bias tools as "hopelessly inadequate." That's not for want of trying, but rather a function of the inherent difficulty of training AI: human biases and errors are transmitted to the artificial intelligence people create, and once there, the AI amplifies them without correction. Nick Clegg, writing on behalf of Facebook, defensively but on the whole correctly points out that "it takes two to tango," that is, you and the algorithm.
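The amplification dynamic described above can be shown with a toy simulation (the numbers and the "model" are entirely invented for illustration; this is not how Facebook's systems work): train on labels that are 70% skewed toward one class, optimize for accuracy alone, and the resulting model predicts that class every time, turning a 70/30 imbalance in the humans' decisions into a 100/0 imbalance in the machine's.

```python
from collections import Counter

# Hypothetical training labels: 70% of human reviewers' decisions
# favor class "A", 30% favor class "B" -- a skewed but mixed signal.
training_labels = ["A"] * 70 + ["B"] * 30

# The accuracy-maximizing constant model on this data simply
# memorizes the majority label it saw during training.
majority_label = Counter(training_labels).most_common(1)[0][0]

def model(_example):
    return majority_label

# At inference the model outputs "A" 100% of the time: the 70/30
# human bias has been amplified to 100/0, with no correction.
predictions = [model(x) for x in range(100)]
```

Real classifiers are subtler than a constant predictor, but the same pressure applies: a model rewarded only for matching its training labels has no incentive to preserve, let alone correct, the minority signal.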
There seems to be, however, a kind of epistemic entropy at work with AI. Consider the potential for delusion, deception, and the automatic production of nonsense in two increasingly capable text-generation systems, newcomer Eleuther and its more established rival, OpenAI's GPT-3. They can be used for partial automation of customer service, generation of dialogue in virtual reality environments, and improvement of search results. And it's a lead-pipe cinch that these applications, or ones like them, will be used to remove the distinctive stylistic stigmata of phishing, robocalls, and other forms of social engineering.