Cyber deterrence: attribution and ambiguity (and certainty, too).
Sep 7, 2016

Cyber deterrence is still in its infancy, roughly where nuclear deterrence was in 1950. And while nuclear deterrence may offer some instructive analogies, those analogies are imperfect at best.

Moderated by Terry Roberts (Founder & President, WhiteHawk), the panel included Shawn Henry (Crowdstrike), Dr. Greg Shannon (White House Office of Science and Technology Policy), Sean Kanuck (former National Director of Cyber, Office of the Director of National Intelligence), and Lieutenant General Kevin McLaughlin, USAF, (Deputy Commander, US Cyber Command).

Roberts invited each panelist to offer preliminary remarks. General McLaughlin began, saying that his command's top priority is to operate and defend Department of Defense networks. Protection of these networks in a broader sense would depend upon deterrence. Cyber Command is also charged with defending the nation against high-consequence cyber attacks. Deterrence, including cyber deterrence, isn't necessarily confined to "cyber nails and cyber hammers." Deterrence involves a full range of capabilities to impose costs, deny benefits, and demonstrate resilience. And cyber tools may also be used to deter adversary behavior in non-cyber areas.

Sean Kanuck stressed that he's no longer the National Intelligence Officer for Cyber, but said he'd start from that perspective. He quoted DNI Clapper on the difficulty of deterrence in cyberspace: low barriers to entry, low cost, and lack of consequences. Whether cyber attacks can, as a matter of national policy, be deterred depends upon which actor you're trying to deter. Criminals, activists, and ideologues can be handled as a law enforcement problem.

Deterring state actors is more complex, Kanuck noted. What would we do, he asked, to deter Pyongyang from repeating the Sony hack, for example? Deterrence policy necessarily rests on certainty (attribution), capability, and the credibility of our response. He noted, as had General McLaughlin, that we're not limited to retaliation in kind, though there are legal thresholds that determine what you can do. Credibility depends on what we're actually prepared to do. Are we, for example, willing to go kinetic in response to the doxing of a political figure? How about the manipulation of an election? Probably not, but these questions have no obvious answers.

Kanuck closed by saying that he thinks cyber deterrence is where nuclear deterrence was in the early 1950s. But it's harder than that, he added. Cyber conflict is "massively multipolar," and domestic threats are in some cases inextricable from international threats. And cyber tools are different from kinetic munitions. They're perishable—upgrades change effects. They need to be highly specific. They're not like a 9mm bullet or a JDAM. And cyber weapons can be immediate in their effects, which puts us in a much more precarious position than we've occupied under other deterrence regimes. Cyber weapons are also not discriminating, nor do they avoid targeting civilians. Finally, such weapons are unpredictable: every operation has some unexpected consequences. The reality is that a wide range of actors are not presently being deterred.

Shannon asked, from the perspective of OSTP, what a science and technology program that could serve cyber deterrence would look like. We haven't sufficiently focused, he suggested, on the fact that various techniques can be effective in advancing deterrence. You want an adversary to find it difficult to discover and exploit a vulnerability. You want it to be difficult for an attack to go undetected, and you want it to be difficult to avoid attribution. Science and technology that could advance all of these desirable difficulties is on the way.

Taking up the other panelists' themes, Henry asserted flatly that sophisticated adversaries with time and resources will get into your networks. He believes that, if we can't prevent attacks with 100% certainty—and we can't—then we need to move to deterrence. We do so by increasing the costs to the adversary. Imposing costs goes beyond technology: it's about policy, process, and strategy. We haven't heard a lot yet about the private sector's role, he noted, even though the private sector is often our first line of defense. We haven't built an effective intelligence-sharing capability (not, he stressed disdainfully, mere information sharing). Developing credible deterrence models is crucial, and it's essential to recognize that deterrence is going to remain an inherently governmental responsibility.

The panel closed with an extended discussion of attribution (difficult, but not impossible), of the extension of the laws of armed conflict to cyberspace (still very much a work in progress), and, paradoxically, of the necessity of ambiguity: you want the adversary to be confident you'll retaliate at some point and in some way, but drawing red lines is a loser's game.