Several trends drew the attention of panelists and speakers at ITSEF. Two of them, resilience and the burgeoning Internet of Things, we'll consider separately. The others we'll summarize here.
Cryptography, quantum computing, and risk management.
The opening panel on ITSEF's second day, March 8th, 2018, took up the future of cryptography, clearly one of the central families of technology that will continue to shape cybersecurity. Moderator Taher Elgamal (CTO, Salesforce) asked Martin Hellman (Professor Emeritus, Stanford University) to open the discussion, and Hellman did so with a review of the recent history of cryptography.
He drew from that history a warning: since 1990 we've seen no significant advances in factoring. That stasis, however, doesn't mean encryption is a solved problem, and we can't let it make us complacent, he argued. Most current talk about the need for new forms of encryption arises from concerns about quantum computing. Hellman thinks this is a mistake, and that the field needs to rethink its approach to risk. (Here he sees a significant similarity to complacency about nuclear deterrence.)
Bob Blakley (Global Head of Information Security Innovation, Citigroup) agreed that complacency was a pitfall. He also cautioned that we shouldn't neglect systems that will stand for three years just because they won't last for thirty.
"Crypto naturally degrades over time," was Brian LaMacchia's warning. LaMacchia (Distinguished Engineer, MIcrosoft) argued people working in the field haven't done a good job of addressing this degradation, which occurs whether or not there are breakthroughs like quantum computing.
So far as is known, no one has yet developed an operational quantum computer of the kind that people fear will render existing cryptography ineffectual. Many organizations, including several nation-states, are hard at work doing so. How will we know when someone has succeeded, especially if they're determined to keep the breakthrough quiet? Blakley suggested that "Bitcoin is the canary in the mine that will tell us when someone has developed a quantum computer."
Artificial intelligence and machine learning.
In a panel moderated by Jim Pflaging (Principal, the Chertoff Group), venture capitalists offered their realistic take on the hot topic of artificial intelligence. Sri Chandresekar (Co-Head, AI Investment, Point 72 Ventures) opened the discussion with an overview of the term's history, which he traced back to 1956. When they think of AI, he said, "Many people think of systems that can pass the Turing test, that is, an artificial human, that is, general AI. Machine learning is a subset of AI. Deep learning is a subset of machine learning." So he urged people hearing about "AI" or "machine learning," particularly when they hear about it from vendors, to "be willing to dive in a little more."
Rama Sekhar (Partner, Norwest Venture Partners) also saw hype as a major barrier to understanding. "There's so much marketing using 'AI.' This is the same movie we've seen before: AI has succeeded big data, which succeeded mobile. AI as a market will go down the same path." He suggested that artificial intelligence is more ingredient than field.
As an ingredient, what are its use cases? "AI can be used for finding maliciousness and for automating processes," observed Ken Gonzalez (Managing Director, Trident Capital Cybersecurity). "We'll see a lot more of the first. The second bucket, automating processes, will be designed to improve processes." But there's also a third, more baleful use case for AI, and that's the use "the bad guys have." We're seeing considerable innovation in AI on the part of bad actors as well.
To Pflaging's question about the possibility that we're in an AI funding bubble, Jon Sakoda (General Partner, New Enterprise Associates) reframed the question as one of whether the field is overfunded. He thought not. In fact, it's underfunded. It's both difficult to do and one of the technology trends companies can take advantage of. "I ask an entrepreneur, is this something humans were doing, and can this enable us to use fewer humans? If the answer is yes, then it's interesting," Sakoda said. He also noted that AI is overkill for many of the places people would deploy it. It can be very expensive, and companies should ask, when presented with an AI solution, if they really need the deep learning on offer, or if they could solve their problem with basic statistical regression.
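Sakoda's point about reaching for regression before deep learning can be made concrete. The sketch below (an illustration of the general idea, not anything shown at the panel; the scenario and numbers are invented) fits an ordinary least-squares line to noisy synthetic data using nothing but NumPy:

```python
# Illustrative sketch: many problems pitched as needing "AI" reduce to
# ordinary least-squares regression. The data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scenario: a roughly linear relationship with noise,
# e.g. alert volume as a function of hosts scanned.
X = rng.uniform(0, 100, size=200)
y = 3.0 * X + 5.0 + rng.normal(0, 2.0, size=200)

# Ordinary least squares via numpy.linalg.lstsq -- no deep learning needed.
A = np.column_stack([X, np.ones_like(X)])  # design matrix with intercept
coef, intercept = np.linalg.lstsq(A, y, rcond=None)[0]

print(f"slope={coef:.2f}, intercept={intercept:.2f}")
```

The fitted slope and intercept land close to the true values (3.0 and 5.0), at a tiny fraction of the cost of training and serving a neural network.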
To investors or corporate acquirers presented with an AI company, Gonzalez offered this advice: "The sweet spot is the bad guy does something the startup can stop. Acquire-and-integrate is a consistent theme in big companies. Did the acquired company solve a problem relevant to my customers that I can integrate? If so, then the large company may look at buying a startup with, say, $10 million in revenue."
"Blockchain" is, with "AI," another word companies like to conjure with. Here the jury is out. Pflaging asked what might be the early signs of there being an investable market in blockchain. Sakoda answered with a parable. You know what it's like at RSA? You talk to a company and they urgently tell you what they're selling, and it sounds great, but you're not quite sure what they've got? And then you go to another company and hear something that sounds about the same, but with some different words? And so on? Well, "blockchain is like RSA in Bangkok."
More seriously, he saw the blockchain as "an exciting and enabling technology" whose first killer app is clearly cryptocurrency. "But in what other areas does it really work? To be sure, it's decentralized and very hard to corrupt, but it's unclear what significant use cases it will find outside of cryptocurrencies." Thus it's an interesting technology and one that bears close watching, but the sector has yet to find more general applications for blockchain.