Black Hat has wrapped. The event offered the expected hype, but also some introspection from the security sector. The opening keynote, by Google's Parisa Tabriz, urged attendees to commit to the long work of improving security: addressing root causes, picking well-scoped, achievable goals, and collaborating more closely with those outside the security industry.
Tabriz, who leads both Chrome security and Project Zero at Google, offered what amounted to a plea for well-structured, modestly hyped, disciplined engineering. And there did seem to be some introspection going on, albeit amid more noise than a state fair's midway. Curiously, the barkers' pitches in the booths packing the exhibit floor were often more modest and introspective than many of the briefings, which tended toward spectacle and alarmism: the Martians have landed and the Man is out to get you.
If one theme emerged from listening to the barkers (who, it must be said, were often quite interesting), it was that the industry has grasped a first principle of North American economic reality: capital is cheap and labor is expensive. The solutions they pitched promised to save users time: not simply time to detection or time to response, but the time employees must spend operating the solution, defending an enterprise, or remediating an attack. Many also promised to de-skill some of the more advanced forms of technical work, enabling junior analysts and other personnel to function at higher levels.
Artificial intelligence was, as expected, very much a presence on the floor; vendors offering artificial intelligence and machine learning were too numerous to count easily. There was some healthy skepticism about the larger and more extreme claims made for AI. We stopped by Cylance, one of the leading AI security firms, well known for its commitment to artificial intelligence in security solutions, and asked whether they would claim complete detection of unknown threats with mathematical certainty. Their quick, direct, reassuring (and justifiably irritated) answer was, "Of course not. No one can do that. It's impossible." Still, that AI has considerable, even transformational, utility in security seems beyond question: perfect insight and omniscient detection aren't preconditions of usefulness.
One vendor keen for people to understand why algorithmic certainty in detection is impossible is Comodo. They were eager to explain that detection of unknown threats is a formally undecidable problem, a fact they think is insufficiently appreciated. Their alternative to what they describe as naive and dangerous reliance on machines is default-deny protection coupled with default-allow usability. This morning Comodo issued what it calls a Zero-Day Challenge, inviting AV users, endpoint security vendors, and others to submit any malware sample of their choice; the company will run each sample through its Valkyrie verdicting engine to see whether it gets past. Comodo promises to publish Valkyrie's failures as well as its successes. The company's CEO, Steve Subar, views the challenge as a contribution to cutting through what he sees as industry hype, and also to better, more transparent testing of tools and services.
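The undecidability point Comodo invokes goes back to Fred Cohen's classic result that no program can perfectly classify every other program as malicious or benign. A minimal sketch of the diagonal argument, with a hypothetical `is_malware` detector and an adversarial `CONTRARIAN` program invented purely for illustration (not any vendor's actual engine):

```python
# Sketch of the diagonalization argument against perfect malware detection.
# `is_malware` stands in for any purported perfect detector; here it is a
# deliberately naive signature check, flagging anything that calls do_harm().

def is_malware(source: str) -> bool:
    """Hypothetical 'perfect' detector: flags any program calling do_harm()."""
    return "do_harm()" in source

# The adversarial program consults the detector and does the opposite:
# it misbehaves exactly when the detector pronounces it benign.
CONTRARIAN = """
if not is_malware(MY_OWN_SOURCE):
    do_harm()
"""

# Case 1: the detector says "benign"  -> the program does harm   (false negative).
# Case 2: the detector says "malware" -> the program does nothing (false positive).
# Either way the detector misjudges CONTRARIAN, so no detector can be
# right on every input. This naive detector lands in Case 2:
print(is_malware(CONTRARIAN))  # → True, yet the flagged program never does harm
```

Whatever logic `is_malware` uses, a program built to invert its verdict defeats it, which is why the formal problem is undecidable even though practical detection remains valuable.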