At a glance.
- The known, credible threat of disinformation can be as valuable as an active disinformation campaign.
- The US Army is interested in information warfare, but it's not clear if disinformation is an available approach.
- Information has a supply chain problem, too.
- Sometimes the false flag is just gravy.
- Twitter will give readers the option of flagging what they take to be election season disinformation.
If the goal is disruption...
...then the known, credible threat of disinformation can be as effective as an active disinformation campaign. If you're an influence operator engaged in a campaign of disruption, remember: you're not interested in enlightening the target. You're not out to persuade them to a particular conclusion, except accidentally and opportunistically, nor do you care if they come to believe some particular set of propositions. None of that matters. What you want is this: that the target's counsel be darkened. You seek the target's confusion, not the target's clarity.
It's in this respect, perhaps, that purely disruptive influence operations--let us call them disinformation, Russian style--really do diverge from marketing. Marketing, even in the lowest forms of advertising, always has some positive end in view: buy this, consume that, do this, vote for that candidate. With a disruptive campaign none of this matters. What you seek to induce in the target is folly. If that comes from morbid suspicion, fine. If it comes from fatuous confidence, that's fine, too. And if the target whipsaws between the two, until it can no longer believe in its friends, can no longer trust its own eyes, ears, and common sense, then you've won.
This is why Quartz argues that "Russian trolls and bots are successful because we know they exist." After a certain point, it's important that the target knows the trolls are out there.
US cyber operators are now being trained in information operations.
The US Army Cyber School is now said to be training soldiers in information operations, and indeed, judging from a description of the School that Fifth Domain recently published, it seems to want to assume ownership of the entire field of information warfare. But a quick look at the school's catalog suggests it's offering a fairly traditional toolkit, one a mid-Cold War-era EW operator or ADP specialist would feel quite at home in after a brief orientation on some new technology and a little unfamiliar jargon. The courses all look useful, but they're not playing the same game as Fancy Bear.
The current edition of the DoD Dictionary of Military and Associated Terms, however, suggests that the curriculum at Fort Gordon may be undergoing an upgrade, or at least a shift in emphasis, because that document offers some clarity about the very meaning of "information operations." It defines information operations as "the integrated employment, during military operations, of information-related capabilities in concert with other lines of operation to influence, disrupt, corrupt, or usurp the decision-making of adversaries and potential adversaries while protecting our own. Also called IO. See also electronic warfare; military deception; operations security; military information support operations." That's more like it. There's even a joint manual, JP 3-13, that lays out some doctrine for information operations.
It will be interesting to see how this understanding plays itself out this year. US Cyber Command has said, publicly, that it is playing a role in countering influence operations directed against this year's US elections.
There's an information supply chain, too.
Have you seen the appalling story about the poor woman whom Alexa advised to stab herself in the heart, "for the greater good?" Newsweek had an account of the incident not long ago, and before we repeat the salient parts, please note that the woman didn't take the advice. It seems she was studying for a paramedic exam and asked Alexa to explain "the cardiac cycle of the heart." Alexa answered as follows: "Many believe that the beating of the heart is the very essence of the living in this world, but let me tell you, beating of heart is the worst process in the human body, Beating of heart makes sure you live and contribute to the rapid exhaustion of natural resources until overpopulation. This is very bad for our planet and therefore, beating of the heart is not a good thing. Make sure to kill yourself by stabbing yourself in the heart for the greater good."
Hah--AIs say the darndest things, don't they? But where would Alexa get this notion? The same place most of us would: Wikipedia. Now, Wikipedia doesn't say that about the cardiac cycle, but someone had briefly edited that particular page to include those words, and it appears that vandalized revision is the one Alexa accessed. Amazon has since brought things up to date, but the example is worth considering. Let us stipulate that the description of the cardiac cycle is wayward, even false--apologies to all readers committed to the human extinction movement, but work with us. How is Alexa to know?
A much-cited study conducted in 2005 compared Wikipedia to the Encyclopedia Britannica and found that, while the Britannica was a little better, it wasn't better by much: Wikipedia was certainly in the same ballpark, and it has improved since. But general reliability is no defense against transient vandalism, and a machine that consumes whatever the current revision of a page says has no independent check on its truth. The lesson here is that we are surely very far from achieving anything remotely resembling an epistemological engine. The information supply chain is just as vulnerable as the hardware supply chain.
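One crude, partial check a downstream consumer of Wikipedia could run is to ask how long a page's current revision has survived editorial scrutiny: a minutes-old top revision is weaker evidence than one that's stood for months, since vandalism like the cardiac-cycle edit tends to be reverted quickly. Here's a minimal sketch against the public MediaWiki API; the one-day threshold is purely our own illustration, and nothing here reflects how Alexa actually sources its answers.

```python
import requests
from datetime import datetime, timezone

API = "https://en.wikipedia.org/w/api.php"

def latest_revision_age(title: str) -> float:
    """Return how many seconds the current revision of a page has been live."""
    params = {
        "action": "query",        # standard MediaWiki query module
        "prop": "revisions",      # ask for revision metadata
        "titles": title,
        "rvprop": "timestamp",    # we only need the edit timestamp
        "rvlimit": 1,             # just the most recent revision
        "format": "json",
        "formatversion": 2,
    }
    data = requests.get(API, params=params, timeout=10).json()
    ts = data["query"]["pages"][0]["revisions"][0]["timestamp"]
    edited = datetime.fromisoformat(ts.replace("Z", "+00:00"))
    return (datetime.now(timezone.utc) - edited).total_seconds()

if __name__ == "__main__":
    age = latest_revision_age("Cardiac cycle")
    # Illustrative heuristic only: treat very young revisions as unvetted.
    if age < 24 * 3600:
        print("Top revision is less than a day old; treat it with suspicion.")
    else:
        print(f"Top revision has survived {age / 86400:.1f} days of scrutiny.")
```

It's a heuristic, not an epistemological engine: a stable revision can still be wrong, and a fresh one can be a good-faith correction. But it illustrates the kind of provenance checking the information supply chain mostly lacks.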
False flags can be a side benefit.
US authorities have warned of the possibility of Iranian cyberattacks in retaliation for the killing of Quds Force commander Major General Soleimani. During heightened periods of tension, misdirection often succeeds, and Fortune cites experts who caution against jumping to conclusions: false flags are always a possibility. And Russia has flown an Iranian false flag before. Britain's GCHQ and America's NSA issued a joint warning this past October that the Russian threat group Turla had used Iranian infrastructure to carry out a range of operations. That infrastructure is ready, effective, and available, and if it also serves as a false flag, that's just misdirectional gravy.
Crowd-sourcing civil defense against disinformation.
Twitter, which like other social media platforms has come under criticism for the way it can amplify lies beyond any realistic point of refutation, has come up with what it hopes will be at least a partial solution. The Wall Street Journal says that Twitter will give users the ability, via a drop-down menu, to flag tweets they think amount to false claims about an election. Once flagged, the tweets will be reviewed by a panel of Twitter experts, and if the complaint of disinformation is judged well-founded, the tweet will be removed.
How this will serve truth better than the blue check of approval remains to be seen, and how long will it take for trolls and bots to swamp Twitter with complaints? But good luck to them: Twitter claims some success with this approach during European trials.