Securing the car through vulnerability testing and coordinated disclosure programs.
Earlier speakers had seen much promise in the white-hat community. This panel turned to an insider's view of that community and its practices: crowdsourcing, bug bounties, responsible disclosure, etc.
Dan Lohrmann (CSO and Chief Strategist, Security Mentor; former CISO, State of Michigan) moderated the panel, which included Marten Mickos (President and CEO, HackerOne), Casey Ellis (Founder and CEO, Bugcrowd), and Titus Melnyk (Senior Manager – Security Architecture, Fiat Chrysler Automobiles US LLC).
A role for white hat hackers (and the bug bounties they'll work for).
The panel was interested in fleshing out a role for white hats. Bughunters are paid a bounty for results, and they can return them very quickly and very affordably. Mickos noted that the "Hack the Pentagon" program returned its first result in just thirteen minutes. Melnyk, speaking as a member of the automobile industry, said that Fiat Chrysler sees its bug bounty program as the right evolution of its program at the right time. Ellis noted the importance of coming to some "sane starting point" in putting out bounty programs, and then earning trust. He also spoke to the bughunter's motivation, which isn't narrowly or immediately financial: a successful white-hat hack is a serious credential in what's a very meritocratic space.
Given all of this, Lohrmann asked, "why do companies resist bug bounties?" Mickos thought the issues are largely cultural. Companies shouldn't resist them. The bug bounty is a proven way—"the only proven way"—of finding flaws. But he thinks a cultural shift is underway. Ellis agreed, citing the need to overcome the popular perception of the hacker as the bad guy.
As was appropriate to a session on the wisdom of crowds, the panel soon turned to the local crowd for questions, the first of which was, why do researchers object to the word "responsible," as in "responsible disclosure?" Ellis said it's because "responsible" has a moral load to it. Many researchers prefer "coordinated disclosure." (Mickos interjected that he's tired of complex names and ranted entertainingly about the damage squabbling over vocabulary does, upon which Ellis apologized, to some laughter, for being part of the problem. Still, the audience got the point.)
A questioner asked how you might vet bughunters. Mickos answered that the bad guys are already there. "We're bringing in good guys, and there are more good guys than bad guys." It's also important to note that you only pay for good results, and that "bounty programs know their hackers better than companies know their employees." Melnyk pointed out that in a bounty program you also set some boundaries, like "stay out of people's PII." Ellis argued that "everyone in this room will be using bounties because it's the most efficient way of doing things." It's also important to engage the bughunters early in the design process. Mickos argued that we need to feed what we find in live software back to the designers.
To a follow-up question about whether there was sufficient identity protection in bug bounty programs, Ellis said that you do need a degree of verification and trust. "We should evolve toward an open conversation, but we're not there yet. Verification relaxes as trust improves."
"Openness, not secrecy, fosters security."
Should industry move to open source? Mickos was unequivocal: "Yes. Openness, not secrecy, fosters security."
At the end of the Summit we were able to catch up with one of the panelists, Bugcrowd Founder and CEO Casey Ellis. He described Bugcrowd as offering a platform on which customers can run competitions among tens of thousands of white hats for vulnerability discovery. The hackers do it for both glory and cash.
Ellis started in penetration testing, then moved to solutions. He saw, in that work, an imbalance between supply and demand, and he also saw that the relatively low supply of pentesters led to meagre results—you only needed to reach a low level of effectiveness to win business, precisely because your skills were in such demand. The economics, Ellis said, didn't favor results. And so people began to wonder whether they might use the adversaries' model—the black market, after all, serves as a kind of crowdsourced research and development resource for criminals.
Ellis sees the obstacles to using that model as coming down, fundamentally, to issues of management and trust, and he believes such issues can be overcome. Begin by realizing that customers have widely divergent risk tolerances. Bugcrowd starts by assessing the skills and trustworthiness of the researchers. (He noted, in an aside, that those skills vary widely, and that researchers across the spectrum of skills can all play a part in crowdsourcing. There are some "super hunters," perhaps about twenty-five individuals worldwide, who can earn in the vicinity of $250,000 a year from bounties. Most, of course, earn far less than that, but they still earn enough to meet their lifestyle needs and even aspirations.)
Bug bounty programs are, Ellis believes, the future of pentesting. Pentesting right now is generally under-resourced, and high-end, highly skilled researchers are employed doing commodity vulnerability discovery. That work is better handled by the crowd; the high-end talent is better employed on triage, management, red-teaming, or defensive architecture.
We asked him about the legal liabilities surrounding bug bounty programs. He thought this area remained poorly understood, but he also believed that it posed no insuperable obstacles to bughunting. "This is," he concluded, "going to happen, in some form." Getting the white-hat hackers to swarm the way the black hats do is the only way, he argued, to stay ahead of the adversaries. Companies need to handle trust issues in many areas, and Ellis was confident they'd succeed in doing so here as well.