The Vulnerability Equities Process: disputed questions.
The National Security Agency currently provides the Executive Secretariat for the Vulnerability Equities Process. (Photo: NSA)
Oct 21, 2016


Stephanie Pell (Assistant Professor and Cyber Ethics Fellow at the Army Cyber Institute) moderated a panel discussion of the US Government's vulnerability disclosure practices. The panelists included Dave Aitel (Founder, President, and CEO of Immunity Inc.), Steven M. Bellovin (Professor of Computer Science, Columbia University), and Ari M. Schwartz (Managing Director, Cyber Security, at Venable). Pell framed the discussion in terms of the going-dark and crypto-wars debates. She described "Playpen," a court-approved use of malware to exploit a vulnerability, which enabled law enforcement to identify customers who frequented a dark-web child pornography site. Mozilla filed a motion to compel the FBI to disclose the vulnerability so that Mozilla could patch it to protect Firefox users. This, she suggested, is the sort of issue the Government's Vulnerability Equities Process (VEP) was designed to address. She then turned the floor over to each panelist in turn.

Pro: the history and goals of the Vulnerability Equities Process.

Schwartz began with some history. Until October 2015, he served as senior director for cybersecurity in the US National Security Council, and so he was in a position to observe the VEP's implementation and evolution. The program began in 2008 with National Security Presidential Directive 54, which established the Comprehensive National Cybersecurity Initiative and required various agencies to share and disclose vulnerabilities. But the Government got truly serious about the process in the post-Snowden era. The general consensus was that the default posture should be disclosure. NSA was designated the VEP's Executive Secretariat, and an Equities Review Board (ERB) was established to decide whether to retain or disclose vulnerabilities.

The VEP was itself publicly disclosed after the Heartbleed vulnerability came to light. Michael Daniel, Special Assistant to the President and Cybersecurity Coordinator, blogged in detail about how the VEP worked: the US Government was widely, and falsely, believed to have known about Heartbleed in advance, Schwartz said, and it felt it important to set the record straight.

There are now recommendations out to revise the VEP. These include:

  • Formalize Government-wide compliance.
  • Publicize high-level criteria for disclosure or retention.
  • Clearly define the process followed in making disclosure decisions.
  • Ensure that decisions are subject to periodic review.
  • Prohibit agencies from entering into non-disclosure agreements with respect to vulnerabilities.
  • Move the Executive Secretariat from NSA to the Department of Homeland Security.
  • Direct periodic public reports by the Executive Secretariat.
  • Expand Congressional oversight of the Government's use of vulnerabilities.
  • Mandate oversight by independent Executive Branch bodies.
  • Expand funding for both offensive and defensive vulnerability research.


Stephanie Pell, Assistant Professor and Cyber Ethics Fellow at the Army Cyber Institute at West Point, seated left, moderates a panel discussion of the US Government's vulnerability disclosure practices during the CyCon U.S. International Conference on Cyber Conflict in Washington, D.C., Oct. 22, 2016. The other panelists include Dave Aitel, founder, president, and CEO of Immunity Inc.; Steven M. Bellovin, Professor of Computer Science, Columbia University; and Ari M. Schwartz, Managing Director, Cyber Security, at Venable. #CyConUS16. U.S. Army photo by Sgt. David N. Beckstrom

Con: "a PR sham that only hurts US interests and is in general a bad idea."

With that, Schwartz yielded the podium to Aitel, whose view of the VEP was decidedly negative: "The VEP is a PR sham that only hurts US interests and is in general a bad idea." Its nominal goals, he argued, amount to a transparency exercise for confidence building with the US software industry and a bid to address systemic insecurity, but they really provide cover for fixing vulnerabilities adversaries may be using (maybe), "but for sure feed incredible amounts of lawyers."

"Disclosing vulnerabilities to companies makes them like you a lot less," Aitel said. "It's hard to address insecurities when you're doing stuff that undermines trust." Initiatives like the VEP only create noise. "You have a situation in which we've tried to cover insecurity with a thin process that feeds lawyers." Consider, he asked, the fatal vagueness of many of the concepts the VEP tosses around. What's a zero-day anyway? It's complex. It's got to be an exploit you know, and that you know nobody else knows exists. You've got to test it in the wild. The decisions the VEP says it's making are unbounded and very complex, dealing with unbounded uncertainty in the three dimensions of defensive operational risk, technical understanding of the issues involved (and of all the related issues), and offensive operational opportunity. Addressing each of these, Aitel insisted, would require many man-months for any single bug. Thus the VEP is not designed to be real and cannot be real—it's just PR.

It also carries many costs, OPSEC (operations security) prominently among them. "Always imagine your adversary can go back in time to analyze past traffic and host behavior for known signs of compromise. So the best posture is to use every 0-day only once. Second best is using it on only one target set and with one toolchain. Worst is using it briefly on all your target sets, and then releasing it." And he suggested that the VEP leaves us in the worst posture.

Attribution is very important, but we give up attribution when we give up bugs. Aitel thinks, for example, that Microsoft Security is probably compromised. "It's in India. Don't think the Indian Government has missed out on its opportunity." Bugs don't always get fixed properly. Smart companies introduce mitigations that don't depend on patching. And from the size of our effort on certain products, our capabilities become known.

Aitel argued that vulnerabilities and exploit technologies are not commodities; they are also linked. "A crappy VMware escape is right next to a really good one, and in the same subsystems as the one you'll be using five years from now." Exposure of an attack surface can sometimes be more damaging than exposure of a single bug. Exploitation techniques, the fact that you know you can exploit a particular vulnerability, may not be something the adversary knows. The same is true for bug classes (yet "nobody is asking for a math equities process; nobody asked what the overlap between our bugs and our adversaries' bugs was in the first place"). We have a national interest in assuring that the commerce of the world is secure, but we don't have an obligation to do quality assurance for the richest companies in the world.

It's impossible to project the future technology used by our target sets. We don't know the level of painful exposure we'll have later. "We need a massive head start on adversaries to account for Hal Martinesque operational leaks or sudden defensive advances."

Aitel claimed another "indirect but horrible consequence" to the VEP: long-term operational uncertainty leads to underinvestment in critical strategic areas, and, he added, enforced outsourcing of our most sensitive research and development.

Attempted fixes and recommendations from professional operators (Aitel recommended seeing his discussions of these at Lawfare) are unlikely to help. The Government should instead kill the VEP and try another, non-sham confidence-building measure, addressing systemic risk with hardening rather than vulnerability-finding. Use Einstein 3.

He closed with a quick list of "additional ideas not worth implementing": bugs with defined timespans, moving the function anywhere except NSA/CyberCom, continuing and codifying the VEP process, limiting the number of bugs we hold ("this would assume we don't really want to continue the mission"), and making massive policy on sensitive areas of information security without thinking about the collateral damage. It's time, he concluded, to walk the VEP back.

"Imperfect but useful."

Bellovin took the floor last, and occupied a middle ground. He characterized the VEP as imperfect but useful, and said that, while much of what Aitel said was correct, not all of it was. Intelligence certainly matters, and the Intelligence Community's ability to function is and will remain important: it has a mission to protect. "Spying will end sometime after a sustained outbreak of world peace." But defense matters, too. Attacks can be prevented. Multiple parties can purchase the same exploit. Exploits can be stolen as opposed to leaked.

He took mobile security as an example: some 80% of the mobile apps studied get the cryptography wrong. "We've got to take some of these vulnerabilities and disclose them, at the proper time to the proper parties." The hacker community may not know the kinds of vulnerabilities the intelligence community finds interesting, but even if that's true, the ability to find those holes is out there; you can redirect hacking skills toward the appropriate objectives. Take Stuxnet: some of the vulnerabilities it exploited weren't new, and weren't 0-days.
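To make Bellovin's 80% figure concrete, here is a minimal sketch, in Python with the widely used cryptography package, of the sort of misuse such studies report. The particular flaws shown (a hard-coded key and ECB mode) are illustrative assumptions on our part, not examples cited on the panel.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    HARDCODED_KEY = b"0123456789abcdef"  # mistake 1: the key ships inside the app

    def broken_encrypt(plaintext: bytes) -> bytes:
        # Mistake 2: ECB mode leaks structure; identical 16-byte plaintext
        # blocks encrypt to identical ciphertext blocks.
        encryptor = Cipher(algorithms.AES(HARDCODED_KEY), modes.ECB()).encryptor()
        return encryptor.update(plaintext) + encryptor.finalize()

    def better_encrypt(key: bytes, plaintext: bytes) -> bytes:
        # Authenticated encryption with a fresh random nonce per message.
        nonce = os.urandom(12)
        return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

    # The classic ECB failure: repetition in the plaintext shows through.
    ciphertext = broken_encrypt(b"ATTACK AT DAWN!!" * 2)
    assert ciphertext[:16] == ciphertext[16:32]

Flaws like these are trivial for a motivated bug-hunter to spot and easy for an app developer to miss, which is the force of Bellovin's point about disclosing at the proper time to the proper parties.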

"You've got to find a balance," Bellovin said. "I'm not saying immediately disclose everything. The exploit of course is to gain access. The payload is not the same as the exploit. The payload should not be disclosed. The bad guys know this. It's well-understood technology—they have loader-dropper architecture." Consider wiretap law as a possible model—at some point the tap must be disclosed. We should balance the risks to national security to decide when to disclose. We don't know whether the VEP is actually fulfilling its purpose—to make systems more secure. If it's not working well, we should ask how we can fix it?

Debate and a middle ground—disclosure in the face of clear and present danger.

It's worth noting, if this isn't sufficiently obvious from the account so far, that the panel's discussion was marked by considerable heat, especially in the sharp exchanges between Schwartz and Aitel.

Schwartz took strong exception to Aitel's presentation. He returned to Daniel's blog about the VEP—the process was disclosed after Heartbleed, when the Government was accused of hiding what it knew, and it's therefore nonsense to dismiss the effort as mere public relations. It had been in existence for years before its disclosure, and therefore couldn’t have been conceived as a PR ploy. "If you care at all about defense," Schwartz argued, "Aitel is wrong."

"The only time to disclose a vulnerability," Aitel responded, "is when it becomes a clear and present danger." That's not leaning in the direction of disclosure, and that's also not the present policy. You should answer how likely it is someone else will discover the vulnerability before you make a disclosure policy. Of course defense matters, but vulnerabilities are the wrong path to approach it. What you can and should do instead is education: that's how you communicate with a partner. Defense is very important. But the vulnerabilities we find in the government (in very expensive ways) aren't for defense. They're for offense. What we have here is something that isn't effective, cannot be effective, and should be stopped. And he said, again, that the Government should forget about the VEP and instead extend Einstein 3.

Bellovin disagreed that Einstein 3 would provide the answer. "Trying to sell the American public that there should be more explicit monitoring is a hard sell. It's a complete and total non-starter legally, politically, and morally."

Schwartz interjected, "The whole industry is moving away from signature-based solutions, and yet you [Aitel] want to move in that direction."

Bellovin continued by agreeing that there isn't enough technical expertise informing decision-making about disclosure. Both technical and operational understanding are required. (He would hesitate to let only the NSA provide that expertise. "They're an interested party." A devil's advocate could offer valuable input into the VEP.)

Pell asked the panelists whether we needed research into adversaries' discovery of vulnerabilities. Aitel thought that, while rediscovery of vulnerabilities undoubtedly occurs, it appears to be relatively rare, and that little research into the question has been done. (Schwartz objected, saying strongly that Aitel had "no idea" what research had been done.) Bellovin noted that Dan Geer has phrased the question differently: are vulnerabilities dense or sparse? Aitel answered that we don't know ("and Ari doesn't know or he'd tell us").
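Geer's question can be made concrete with a toy collision model. To be clear, the sketch below and its pool sizes are illustrative assumptions of ours, not analysis offered by the panel: it simply estimates how often two parties, independently finding bugs at random in a shared pool, end up holding the same bug.

    import random

    def rediscovery_rate(pool_size: int, finds_per_party: int,
                         trials: int = 10_000) -> float:
        # Estimate the chance that two independent parties, each drawing
        # `finds_per_party` distinct bugs uniformly from a pool of
        # `pool_size` exploitable bugs, hold at least one bug in common.
        hits = 0
        for _ in range(trials):
            ours = set(random.sample(range(pool_size), finds_per_party))
            theirs = set(random.sample(range(pool_size), finds_per_party))
            if ours & theirs:
                hits += 1
        return hits / trials

    print(f"sparse pool (100 bugs):    {rediscovery_rate(100, 20):.2f}")      # ~0.99
    print(f"dense pool (100,000 bugs): {rediscovery_rate(100_000, 20):.2f}")  # ~0.00

Under the sparse hypothesis, adversaries probably already hold whatever we retain, so disclosure costs little; under the dense hypothesis, overlap is rare and retention is cheaper. Which hypothesis holds is exactly the unknown Aitel kept pointing to.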

Questions and objections: reuse, automation, and shenanigans.

Don't criminals reuse disclosed vulnerabilities?

"The bad guys happily reverse patches people don't apply," Bellovin answered. "Patching is a serious and unsolved problem." Aitel thought the problem of reuse was a point against the VEP: "If there are classes of bad guys who reverse engineer and reuse vulnerabilities, isn't that an argument against disclosure?" Schwartz agreed with Bellovin on patching. In his current work, he sees that companies that have recently made many acquisitions are the ones who have the most trouble patching.

I haven't heard "automation" from any of you. Hasn't the DARPA Cyber Grand Challenge shown us that we'll soon be talking about the relative speed of discovery and patching, as opposed to legal disclosure?

Aitel thought it interesting that DARPA's Cyber Grand Challenge also had an equities process built into it. Bellovin called the Grand Challenge interesting research, but reminded the audience that it was research, and was a couple of decades away, maybe, from being reduced to practice. "It's like the self-driving car, only worse, because in security you're working against an intelligent adversary who adapts." Schwartz pointed out that you can speed up the process of discovery and patching, but that wouldn't help the companies that don't apply the patch.

In disclosure, we're revealing substantial amounts of research. If we're just dealing with patch creation, should that process turn on vendors' patching? And how do you place a value on disclosure of a vulnerability to the vendor?

"We don't know the answer to either," Aitel said, "and we haven't even begun to study it." Schwartz insisted that what a company does with the information you give them is a separate matter from the VEP itself. "If I'm a foreign power and find some exploit in a vendor's software," Bellovin said, "I'm going to hack them to see if they've got any interesting disclosures they haven't acted on."

[From a Microsoft person in attendance.] I've gotta call shenanigans on some stuff that's been said. The Microsoft security center isn't in India, for example. The rediscovery issue is by definition only for the private bodies. How do we get enough public understanding to make intelligent decisions?

Bellovin thought the best practice would be to use some existing oversight body as a model, and to remember that the decisions taken would necessarily rest on highly classified intelligence. Schwartz agreed. Aitel made a final call for more research—we lack, he said, the ability to make valid decisions based on data because we simply don't have the data. Schwartz concurred that indeed, more data and more technical knowledge would be good things, and could only enhance the Vulnerability Equities Process.

With that the panel concluded. It's pleasant to note that Aitel and Schwartz parted with a handshake that gave every appearance of warmth.