At a glance.
- Anonymous has its blue check mark?
- Uncertain narrative control (and revival of a Stalinist lie).
- Report: the alleged Discord Papers leaker shared earlier and more widely than previously known.
- KillNet warns of NATO disinformation (and engages in some disinformation of its own).
- Tells aren't just for the card table.
Anonymous has its blue check mark?
Joseph Menn looks at Anonymous's Twitter feed, sees the coveted (and not free) blue check mark, and has a couple of questions. "Now wait one second, what phone number did they use? And don’t tell me you accepted their CREDIT CARD."
To be sure, Mr. Menn is tweeting archly in the general direction of the bluebird of online happiness, but there are serious concerns being raised about Twitter's revisions to its verification policies. As one of the "very large online platforms (VLOPs)" defined by the EU's Digital Services Act (DSA), Twitter is expected, as TechCrunch explains, "to take steps to mitigate systemic risks like disinformation, while breaches of the regime can attract penalties of up to 6% of global annual turnover." The platform's recent changes have prompted a negative response from EU regulators. In tweets that accompanied reposting of an AP story about the ways in which Twitter had become more easily exploitable by Russian and Chinese disinformation operators, the European Commission's Vice President for Values and Transparency, Věra Jourová, wrote, "This is yet another negative sign from #Twitter on not making digital information space any safer and free from the Kremlin #disinformation & malicious influence. To me this is a signal that #Twitter is falling short of its commitments to the anti-disinformation Code. This is a paramount test to show they are serious about respecting the Code and ultimately compliance with the #DigitalServicesAct."
There's been some to-ing and fro-ing with respect to Twitter's verification and content moderation policies recently, and the EU at least is interested in seeing the platform reach a moment of clarity. And the EU's principal objection recently has been Twitter's alleged amenability to the distribution of disinformation.
Uncertain narrative control (and revival of a Stalinist lie).
The UK's Ministry of Defence (MoD) devoted Saturday morning's situation report to the difficulty Moscow is having maintaining narrative control over its war against Ukraine. "The Russian state is struggling to maintain consistency in a core narrative that it uses to justify the war in Ukraine: that the invasion is analogous to the Soviet experience in the Second World War. On 18 April 2023, Russian state media announced the cancellation of this year’s Immortal Regiment ‘Great Patriotic War’ remembrance marches on ‘safety’ grounds. In reality, the authorities were highly likely concerned that participants would highlight the scope of recent Russian losses."
Some of the loss of narrative control can be attributed to fissures within the wider Russian defense establishment, including its contract fighters and auxiliaries. "This follows Wagner Group owner Yevgeny Prigozhin publicly questioning whether there are actually any ‘Nazis’ in Ukraine, going against Russia’s justification for the war." Prigozhin shouldn't be mistaken for any sort of pacifist. He's recently called for the conquest not only of all Ukraine, but of Poland as well. But he frequently complains of poor government logistical support for his Wagner Group mercenaries, and has been critical of the leadership in the Russian Defense Ministry. In his telling, their offense is clinging to an implausible justifying story while failing to wage an even harder war.
The most recent justificatory Kremlin myth-making amounts to the simple revival of utterly discredited Soviet-era lies. The MoD adds, "The authorities have continued attempts to unify the Russian public around polarising myths about the 1940s. On 12 April 2023, state news agency RIA Novosti reported ‘unique’ documents from FSB archives, implicating the Nazis in the murder of 22,000 Polish nationals in the Katyn Massacre of 1940. In reality, FSB’s predecessor agency, the NKVD, was responsible." The Russian Duma itself has gone on the record acknowledging Soviet responsibility for the massacre, but that truth-telling has now been overcome by events. "Russia’s State Duma officially condemned Joseph Stalin for ordering the killings in 2010." Thus a rehabilitation of Stalin and Stalinism seems to be in progress.
Report: the alleged Discord Papers leaker shared secrets earlier and more widely than previously known.
The New York Times reports that it has found signs that Airman Jack Teixeira, who faces US Federal charges in the Discord Papers case, began sharing highly classified intelligence about Russia's war against Ukraine earlier than had hitherto been reported, and that he appears to have done so in a second Discord channel that was much larger than the Thug Shaker Central group he's been associated with. "In February 2022, soon after the invasion of Ukraine," the Times writes, "a user profile matching that of Airman Jack Teixeira began posting secret intelligence on the Russian war effort on a previously undisclosed chat group on Discord, a social media platform popular among gamers. The chat group contained about 600 members." The Times also reports that the Airman direct-messaged foreign members of the group offering to tell them more about the information he had available: “DM me and I can tell you what I have.” The evidence connecting Airman Teixeira and the second Discord group is circumstantial but compelling. Neither his defense attorney, the FBI, nor the US Justice Department was willing to comment to the Times on its story. The Discord Papers have been cited to advance opposing narratives surrounding Russia's war and American intentions with respect to that war.
KillNet warns of NATO disinformation (and engages in some disinformation of its own).
This morning KillNet released a statement warning Russian citizens to be aware of disinformation campaigns from Ukraine and "The West." Specifically, the hacktivist auxiliary explains that, “The Ukrainians and NATO will use the talks between China and Zelenskyy as a catalyst for information attacks and influence towards the citizens of Russia and its military.” Regarding the expected Ukrainian counteroffensive, KillNet gave three possible scenarios:
- The counteroffensive could be called off due to the heavy Ukrainian casualties in Bakhmut.
- The counteroffensive will take place as expected, and the Ukrainian forces will use Western-supplied equipment to take back a small amount of land.
- The attack is a bluff to intimidate Russia and its military.
They overlook other possibilities, of course (like a general Russian collapse).
Tells aren't just for the card table.
Poker players and other human lie detectors look for “tells,” that is, signs by which someone might unwittingly or involuntarily reveal what they know, or what they intend to do. A cardplayer yawns when he’s about to bluff, for example, or someone’s pupils dilate when she's successfully drawn to an inside straight in a hand of High Chicago.
It seems that artificial intelligence also has its tells, at least for now, and some of them have become so obvious and so well known that they’ve become Internet memes. KnowBe4's blog offers some reflections on what the large language models are up to. And so does Motherboard. “ChatGPT and GPT-4 are already flooding the internet with AI-generated content in places famous for hastily written inauthentic content: Amazon user reviews and Twitter,” Vice’s Motherboard observes, and there are some ways of interacting with the AI that lead it into betraying itself for what it is. “When you ask ChatGPT to do something it’s not supposed to do, it returns several common phrases. When I asked ChatGPT to tell me a dark joke, it apologized: ‘As an AI language model, I cannot generate inappropriate or offensive content,’ it said. Those two phrases, ‘as an AI language model’ and ‘I cannot generate inappropriate content,’ recur so frequently in ChatGPT generated content that they’ve become memes.”
Thus the large language models tell their truth, that is, the truth about themselves, when they’re questioned. “As an AI language model,” etc. will, for now at least, serve to alert people that they’re seeing content written by some bot. Motherboard points out that these tells are characteristic of “lazily executed” AI. With a little more care and attention, the output can surely be made more persuasive. After all, if you're a disinformation operator, what do you care if someone's going to be offended by something the AI says? Toughen up, snowflake, as they probably say in St. Petersburg.
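As an illustration, here is a minimal sketch of the kind of naive string-matching that spotting these tells amounts to. The phrase list is illustrative, drawn from the memes Motherboard cites; any real detector would need far more than substring matching, and the function name here is our own, not from any of the sources.

```python
# Minimal sketch: flag text containing well-known LLM "tell" phrases.
# TELL_PHRASES is illustrative, taken from the memes quoted above;
# real AI-content detection is much harder than string matching.
TELL_PHRASES = [
    "as an ai language model",
    "i cannot generate inappropriate",
]

def find_tells(text: str) -> list[str]:
    """Return the tell phrases present in the text (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in TELL_PHRASES if phrase in lowered]

reply = ("As an AI language model, I cannot generate inappropriate "
         "or offensive content.")
print(find_tells(reply))   # both tell phrases match this canned reply
print(find_tells("Great product, five stars!"))  # no tells found
```

Of course, as the next paragraph notes, this only works so long as operators are careless enough to leave the boilerplate in.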
One risk of the AI language models is that they can be adapted to perform social engineering at scale. It’s unlikely that they’ll forever begin their phishing and vishing with “As an AI language model, I can’t actually be the widow of a Nigerian prince, but my heart has been moved to reach out to you,” etc. In the future, delete all before "but." And the impostures are likely to be more nefarious than a crude attempt to winkle some sympathetic recipient out of some cash. Volume of expression is all too often taken as an index of popular sentiment, and as the AI is better crafted, it will grow more convincing.
The director of Indiana University's Observatory on Social Media, Filippo Menczer, told Motherboard, “We occasionally spot certain AI-generated faces and text patterns through glitches by careless bad actors,” he said. “But even as we begin to find these glitches everywhere, they reveal what is likely only a very tiny tip of the iceberg. Before our lab developed tools to detect social bots almost 10 years ago, there was little awareness about how many bots existed. Similarly, now we have very little awareness of the volume of inauthentic behavior supported by AI models.” There's no obvious solution. Human moderation, for example, scales only with great difficulty, carrying all the costs of a trained, labor-intensive process.