Summing up the Intelligence and National Security Summit
Sep 7, 2016

We close our coverage with a quick look back at the annual meeting of intelligence specialists. This year's summit had a strong focus on cybersecurity. The topic was not only addressed repeatedly in the plenary sessions, but was also the focus of one of the conference's three breakout tracks. That cyberspace is of prime concern to the Intelligence Community and those who support it is unsurprising, but a walk through the exhibitors' hall offered striking confirmation: cybersecurity vendors dominated the space.

Also interesting was the clear sense that the leaders INSA and AFCEA drew to the summit were working through some of the same conceptual and practical issues defense thinkers have grappled with over the past century and a half. How those issues will be resolved in cyberspace is in some cases clear; elsewhere it remains murky.

Cyber risk management: plans, drills, and communication.

One area in which the direction forward seems relatively clear is in cyber risk management. The panel in the Cyber Track that took this up had a decided industry perspective, but the lessons they drew for C-suites and boards were reminiscent of those learned by military forces during periods of reform.

The discussion of risk management dealt to a certain extent with the still imperfectly solved actuarial problems of quantifying and transferring risk, but far more time was devoted to the tactics, techniques, and procedures senior managers should adopt to drive down the risks of a cyber incident. These came down to frameworks, communication, planning, team building, and exercises. (The military analogues might be doctrine, planning, organization for combat, and realistic drills.)

Boards need a framework to organize their understanding of risk. (NIST's was much recommended, but the panelists were clear that it matters more that there be a framework than that an enterprise adopt any particular one.) That understanding should concentrate on discerning what could threaten the business (again, in a military analogy, what could cause the mission to fail). The framework should enable security and technical professionals to communicate with business leaders in business language, helping them achieve clarity and make the right decisions to drive down risk.

To do so requires planning, and incident prevention and response planning require that an enterprise be organized for cyber defense in the sense that its members understand their responsibilities and are resourced to carry them out. The plans also need to be exercised. That not only serves the training and education function one associates with rehearsals, but can also reveal gaps, flaws, oversights, and misunderstandings. (Common oversights in cybersecurity plans include leaving out privacy and legal players, failing to think through public relations, and not determining when an enterprise needs to go to law enforcement and other government agencies for help.) Once those gaps are revealed, the appropriate leaders can correct them and see how the revised plans fare in the next cycle of exercises.
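
Even a simple check can surface gaps of this kind. As a toy sketch (our illustration, with hypothetical role names, not anything the panel presented), one might compare the roles a plan actually assigns against the ones the panelists said are commonly overlooked:

```python
# Toy illustration: check an incident-response plan's roster against roles
# the panelists said are commonly overlooked. Role names and plan structure
# here are hypothetical, for illustration only.

# Roles frequently missing from plans, per the panel discussion.
COMMONLY_OVERLOOKED = {
    "privacy",                   # privacy officers
    "legal",                     # legal counsel
    "public_relations",          # PR / communications
    "law_enforcement_liaison",   # who decides when to call the government
}

def find_plan_gaps(assigned_roles: set[str]) -> set[str]:
    """Return commonly overlooked roles missing from the plan."""
    return COMMONLY_OVERLOOKED - assigned_roles

# Example: a plan that covers IT and executives but little else.
plan_roles = {"incident_commander", "it_operations", "ciso", "legal"}
for gap in sorted(find_plan_gaps(plan_roles)):
    print(f"Plan gap: no one assigned to {gap}")
```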

There was general agreement that tabletop exercises are affordable and valuable ways of exercising and testing plans. Businesses looking for a model might consider the evolution of the United States Army's Combat Training Centers, a historical example of how realistic training at all levels produced a virtuous circle of planning, exercise, review, and improvement.

Information-sharing and trust, and the questionable value of reorganization.

There was general consensus that information sharing is invaluable, because cybersecurity is a common problem: one that partners and allies face together, and on which even competitors and adversaries can profitably cooperate.

While the US Intelligence Community agencies that spoke generally presented a unified picture of successful information sharing within the IC, the picture outside the IC was much less clear. This was evident from the panel on the Cybersecurity Information Sharing Act (CISA). CISA was intended to make it easier for industry to share information with the government, but it stopped short of making such reporting mandatory. Greg Touhill, currently Deputy Assistant Secretary for Cybersecurity and Communications at the Department of Homeland Security (and whom President Obama designated last week as the new Federal CISO), offered a familiar perspective on how such information sharing should work: he described himself as "the captain of the cyber neighborhood watch." Information sharing, on this view, should in the first instance be understood as part of a good citizen's ordinary responsibility to let the right authorities know when something's amiss.

CISA aimed to remove obstacles to information sharing, most of which involve reputational and legal concerns. Michael Allen (Partner, Beacon Global Strategies) noted that skepticism about the law persisted. He saw three issues in particular:

  1. "A corporate cultural mindset that's naturally disinclined to share information with the US Government." Companies need reassurance that they won't face regulatory blowback from their good-faith efforts to share cyber information.
  2. "It's got to be as easy as possible to join" any information-sharing system or community. The Government should err on the side of simplicity.
  3. All too often, the "quality of the information isn't good enough." A program to encourage information sharing stands or falls on the value it delivers; if it isn't generating information a business can use, it will fail.

CISA does appear to have made at least two advances. It authorizes companies to monitor networks and deploy measures to prevent attack, which gives them a clear legal framework within which to operate. It also provides a degree of liability protection (and protection against antitrust action, which facilitates private-to-private sharing). But as always, whether the Automated Indicator Sharing (AIS) platform the Department of Homeland Security established post-CISA succeeds will come down to whether it adds value for the participants. The sector- and region-specific ISACs (Information Sharing and Analysis Centers) have shown the ability to mitigate the effects of certain attacks. (Citibank's James Katavolos cited success against some financial sector distributed denial-of-service attacks as a positive result of information sharing.) Whether AIS can enjoy similar success is still an open question: very few companies have signed up for it. One issue surrounding AIS points to a dilemma affecting information sharing generally: if, as consensus holds, effective sharing depends on both widespread participation and relationships of trust, can a widely available and easily accessible system still earn the trust of its members?
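
For the technically curious: AIS exchanges indicators in the machine-readable STIX format over the TAXII transport. As a simplified sketch (our own illustration, not the actual STIX schema), the kind of record being shared might look something like the following; note that the source and handling fields are exactly where the trust question lives:

```python
# Simplified sketch of the kind of record shared through a system like AIS.
# AIS itself uses the STIX format over TAXII; the fields and values below
# are illustrative stand-ins, not the real schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ThreatIndicator:
    indicator_type: str   # e.g. "ipv4-addr", "file-hash", "domain"
    value: str            # the observable itself
    confidence: str       # sharer's confidence: "low" / "medium" / "high"
    tlp: str              # Traffic Light Protocol handling marking
    source: str           # who is sharing; trust hinges on this field
    first_seen: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# Example: an indicator a financial-sector ISAC might circulate after a
# DDoS campaign of the kind Katavolos described.
indicator = ThreatIndicator(
    indicator_type="ipv4-addr",
    value="198.51.100.23",   # documentation address, not a real attacker
    confidence="medium",
    tlp="TLP:AMBER",         # share within the community, not publicly
    source="financial-sector ISAC member (hypothetical)",
)
print(indicator)
```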

Several panels and speakers alluded, generally positively, to the potential for international information sharing. Robert Silvers, Assistant Secretary for Cyber Policy at the US Department of Homeland Security, said that he regarded information sharing as inherently transnational. The Department of Homeland Security "wants to be the world's clearinghouse for cyber threat indicators." (A number of other speakers also stressed that even international adversaries—even Russia, to take what seemed by consensus the hardest case among rational adversaries—were willing to cooperate in some areas of common interest.)

Silvers thought he'd seen encouraging signs internationally, but he also noted what other speakers brought up independently: many challenges remain domestically. Companies have had understandable reasons for hesitating to share: they're concerned about spooking customers, investors, and partners; about civil liability; and about exposure to regulatory agencies. Silvers hoped they would eventually grow comfortable with "the safe spaces" Homeland Security is trying to create.

Where Touhill offered a neighborhood watch metaphor, Silvers drew one from the Las Vegas Strip: if a casino spots a card shark, it won't just kick him out, but will tell the other casinos as well. Industry, he suggested, might do well to follow suit.

Cyber deterrence: groping through the history of deterrence.

While it's clear that most observers see cyberwar as a very real possibility (some see it as a current actuality), it remains quite unclear what actually counts as an act of war in cyberspace. The laws of armed conflict in cyberspace, along with the international norms surrounding them, remain very much works in progress with respect to both jus ad bellum (when resort to war is just) and jus in bello (how war is to be rightly conducted).

But everyone agreed that the United States faces clear adversaries in cyberspace. Russia, China, Iran, and North Korea were called out several times. Most participants regard China as a special case: Sino-US cyber conflict is largely economic, centered on the protection of intellectual property. Participants tended to regard Chinese activity here as threatening, but as fundamentally rational, and as amenable to management through diplomacy, negotiation, and international agreement.

The other three nation-state adversaries speakers named are a different matter. Russia, while rational, exhibits a general hostility to US interests that does not seem resolvable through negotiated agreement. Iran and North Korea represent similarly implacable, although less conventionally rational, adversaries. These states seem to call for a policy of cyber deterrence. Speakers were in general agreement on these points:

  1. Cyber deterrence is particularly challenging because of the lower barriers to achieving useful cyber offensive capabilities, the low cost of cyber operations, and the lack of consequences for employing cyber weapons.
  2. Cyber deterrence depends upon imposing costs, but the prospect of retaliation need not, indeed often should not, be confined to retaliation in kind. Deterrence should involve a full spectrum of possible responses, from "naming and shaming" through sanctions, through cyber operations, and even into kinetic operations. (The closing plenary session discussions offered one example of this—the panel expected ISIS information operations to be significantly degraded as the Islamic State continues to lose territory, and with it the physical and organizational infrastructure it needs to conduct online recruiting and inspiration.)
  3. Cyber deterrence requires certainty of attribution, capability, and credibility. It also, paradoxically, benefits from a degree of uncertainty.

Deterrence against non-state actors—"criminals, activists, and ideologues"—can usefully be treated as a law enforcement matter. This isn't trivial, but in outline it's a well-understood approach. With state actors, matters are more difficult. Questions of credibility inevitably touch on questions of proportionality, and here thresholds determine how far retaliation might go, and thus how credible the prospect of retaliation is. What are we willing to do, for example, in response to the doxing of a political figure? In response to manipulation of an election? Our understanding of cyber deterrence is, several speakers observed, roughly where our Cold War understanding of nuclear deterrence was in 1950. Yet in many respects cyber deterrence is conceptually more complicated. For one thing, cyber capability is multipolar, and domestic cyber threats are often inextricable from international ones. Cyber weapons are also more complicated than kinetic munitions: they're highly perishable, they can be immediate in their effect, they're less discriminating, and their effects remain highly unpredictable.

The CyberWire asked the panel on National Cyber Deterrence Strategy about the challenges of attribution: we hear much about a surprise attack—a "cyber Pearl Harbor"—but much less about the dangerous consequences of action based on mistaken attribution—a cyber Tonkin Gulf incident. The panelists agreed that the risks of mistaken attribution were high, and the consequences of such a mistake potentially grave. Lieutenant General McLaughlin, Deputy Commander, US Cyber Command, said that "there's huge risk if you were to generate a national response to some action that was misattributed. That's why we're held to such a high standard." But the level of attribution necessary may vary with the kind of response contemplated. Other panelists saw the principal risk of wrong attribution as harm to non-belligerents, or even allies. But all agreed that attribution is a difficult problem, and that the confidence in attribution deterrence requires can be achieved only with a knowledge of the adversary built up over time and over multiple instances of attack.
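
That last point, confidence built up over repeated incidents, has a simple Bayesian reading. As a toy sketch (our illustration, with invented numbers, not anything the panel presented): treat each incident bearing a suspected actor's tradecraft as evidence, and update a prior.

```python
# Minimal Bayesian sketch of attribution confidence growing over repeated
# incidents. The probabilities are invented for illustration; real
# attribution weighs far richer evidence (infrastructure, tradecraft,
# timing, language artifacts) and its pitfalls (false flags).

def update(prior: float, p_evidence_if_actor: float,
           p_evidence_if_other: float) -> float:
    """One Bayes update: returns P(actor | evidence)."""
    numerator = p_evidence_if_actor * prior
    return numerator / (numerator + p_evidence_if_other * (1 - prior))

confidence = 0.10   # weak initial suspicion of a particular state actor
for incident in range(1, 6):
    # Each incident shows tooling three times likelier under this actor
    # than under anyone else.
    confidence = update(confidence, 0.60, 0.20)
    print(f"After incident {incident}: P(actor) = {confidence:.2f}")
```

Five consistent incidents take an analyst from 10% to roughly 96% in this toy model, which is the shape of the panel's point: no single incident attributes, but a pattern can.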

Most speakers were wary of drawing clear "red lines" that would trigger retaliation. A kind of desirable ambiguity might be developed from the principles underlying the laws of armed conflict. And, while resilience makes its own contribution to deterrence, the evolution of an effective deterrence regime awaits advances in international norms, technical resilience, and intelligence capability.

One CEO's perspective—Ntrepid's Richard Helms.

Finally, we were able to talk with one CEO at the conference—Ntrepid's Richard Helms—for his perspective. Helms draws on a background in both industry and the Intelligence Community. Addressing issues of defense and deterrence, he noted that everyone, the adversary emphatically included, has finite resources. Whenever you can get threat actors to expend effort fruitlessly on a failed cyber operation, whenever you can get them to waste their time, you've gained: you'll not only have protected an asset, you'll have imposed opportunity costs on the attacker.
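
Helms's cost-imposition point has familiar defensive analogues, such as tarpits that deliberately slow automated attack tooling. As a minimal sketch of the idea (our illustration, not Ntrepid's technique), a TCP listener can accept a scanner's connection and then drip out bytes so slowly that the client wastes minutes on a worthless socket:

```python
# Minimal TCP "tarpit" sketch: accept a connection and drip bytes very
# slowly, so an automated scanner wastes time on it. An illustration of
# the cost-imposition idea only; not a production defense. (A real tarpit
# would service many connections concurrently; handling one at a time
# keeps the sketch short.)
import socket
import time

def tarpit(host: str = "0.0.0.0", port: int = 2222) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((host, port))
        server.listen()
        while True:
            conn, addr = server.accept()
            print(f"Holding connection from {addr}")
            try:
                # Send one byte of a never-ending banner every 10 seconds;
                # many clients will keep waiting rather than give up.
                while True:
                    conn.send(b"x")
                    time.sleep(10)
            except OSError:
                conn.close()   # the client finally gave up

if __name__ == "__main__":
    tarpit()
```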

He also offered his take on encryption. In Helms's view, "encryption, not backdoors, protects the public." He thinks it unwise to leave the public unprotected for the sake of easier law enforcement. He also thinks the Intelligence Community is up to the challenge of dealing with cyber threats without resort to mandated backdoors.

The Congressional report on the Office of Personnel Management (OPM) hack had been released that week, and Helms noted that the Government was offering the breach's victims protection against identity theft. That, he said, misses the point. The hack was almost certainly state-sponsored, and the state that sponsored it isn't interested in the kind of criminal profit to be made from identity theft. (Indeed, there's been little sign of personal data stolen from OPM turning up for sale on the black market.) Rather, the state that hacked OPM is "interested in the people working on programs that state is interested in"—in this case, for the most part, people with clearances. The people exposed in the OPM breach can expect a different kind of attention from the hackers: their own endpoints, not their identities, are primarily at risk.

Helms therefore makes his own offer on behalf of Ntrepid: anyone affected by the OPM breach can sign up for a free year of the company's Passages product, which promises safe browsing by virtualizing the browser and rebuilding it for every session. "Since 90% of the malware gets in through the browser," Helms says, this affords more relevant security than identity-theft protection products do. Those affected who are interested can sign up at www.ntrepidcorp.com/passages/breach.
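
Passages itself is proprietary, and Helms described it only at a high level. As a rough sketch of the underlying idea, a throwaway browser environment rebuilt for each session, one might launch a browser against a temporary profile and destroy the profile afterward. (This sketch assumes a chromium binary on the PATH, and it is far weaker than full browser virtualization.)

```python
# Rough sketch of the "disposable browser session" idea behind products
# like Passages: run the browser against a throwaway profile directory and
# destroy it when the session ends. Assumes a `chromium` binary on PATH;
# Passages' actual implementation (full browser virtualization) is
# proprietary and far more thorough than this.
import shutil
import subprocess
import tempfile

def disposable_session(url: str) -> None:
    profile = tempfile.mkdtemp(prefix="throwaway-profile-")
    try:
        # A fresh profile means no cookies, cache, or extensions carry over
        # between sessions; anything malware writes there dies with it.
        subprocess.run(["chromium", f"--user-data-dir={profile}", url])
    finally:
        shutil.rmtree(profile, ignore_errors=True)  # burn the session

if __name__ == "__main__":
    disposable_session("https://example.com")
```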