Artificial intelligence and machine learning dominate so much conversation about cybersecurity that any CISO is faced with the necessity of explaining this family of technologies to the board. This is always challenging, especially with technologies so heavily hyped, and so liable to easy misunderstanding.
As one panelist (Sriram Chandrasekar, Co-Head, AI Investments, Point72 Venture) put it, his role as a venture capitalist is to discern "the faint shimmer of snake oil" that so often rides atop presentations about artificial intelligence. The panel took up the task of framing AI in ways that would be accessible to boards, and that would give them a realistic sense of what AI is, does, and doesn't do.
The panel was moderated by Dr. Reggie Brothers, Chertoff Group Principal and former senior science and technology executive at both the US Departments of Defense and Homeland Security. In addition to Chandrasekar, the panelists included Walid Ali (Senior Director, Artificial Intelligence & Cloud Solutions, Intel AI Products Group), Melissa Flagg (former US Deputy Assistant Secretary of Defense for Research), and Bob Griffin (Chief Executive Officer, Ayasdi).
How should we understand artificial intelligence?
What, then, might serve as a useful introduction to AI and machine learning? Griffin, whose company Ayasdi offers a platform for AI development, said that AI must be able to do discovery, achieving observational understanding with no supervision, or at most minimal supervision. AI must be able to predict. It must be able to justify its conclusions. It must be able to act. And finally it must be episodic: it must learn. All machine intelligence should exhibit these capabilities.
Ali explained that AI is unusual in that it accrues value at an early stage, even with incomplete data. Thus it poses a challenge: "Where should I jump on the bandwagon?" Looking at the field as a whole, he thought that the course of AI resembled an intensified version of the path data compression took.
How should enterprises buy artificial intelligence, and how should vendors sell it?
Flagg began by characterizing the way artificial intelligence appears in Defense acquisition programs. She thought people tend to misunderstand the Department's approach in a wayward attempt to treat disparate problems as one. AI is in some cases still research, but in other cases something a vendor can sell to the Government. "Innovation and acquisition," she stressed, "aren't the same." The Defense acquisition system is massively complex, and it imposes high standards before it permits systems to move from research and development into acquisition. Her advice to vendors in particular was to be clear on where you stand with respect to your product's readiness. The obverse of that advice holds equally for CISOs evaluating AI products and solutions—understand their maturity before committing to them.
Requirements, of course, loom large in Defense acquisition, but Flagg thinks it's an interesting time to create red teams in AI, rather than waiting for requirements. The field is fast-moving, a complex combination of research and practice.
Chandrasekar spoke about the technologies from the perspective of a venture investor. He noted that most algorithms aren't proprietary, and that in AI data moats are highly valuable. He also stressed the importance of attracting the right people, as solid performers will eventually be able to move technology in a useful direction.
Artificial intelligence use cases
There's no want of problems to work on, in Griffin's view. He thought reduction of false positives an obvious area that requires work. The opportunities to apply AI and machine learning to detection and prevention of financial crime are obvious, and he thought it also possible to develop use cases quickly in healthcare.
Noting that "you can't start working on a problem until you understand it," Chandrasekar agreed that too often we're dependent on someone articulating a requirement before we begin working to define and address a challenging problem.
Data themselves are a supply chain for AI. Understanding that underlying supply chain, in Ali's view, is essential to the success of AI products, and trust in that supply chain is key.
Griffin agreed, and noted that, while content may be king, "access and distribution of content is really king." Delivering solutions to the point of need is in his view everything, a point that Flagg took to have special relevance in Defense contexts: "You've got to work at the speed of the fight." Griffin added that "skills upgrade is a force multiplier," and that the ability of AI to enhance the skills of human operators is one of its most important features.
Limitations and misunderstandings
The ability to handle unstructured data, in Chandrasekar's view, is what makes deep learning so valuable. But we should be aware of the probable limitations of AI. Chandrasekar thought AI well-adapted to working with images, but much less so to working on text. "Text has so much nuance."
And this prompted questions and objections from the audience. How much of the field comes down to buzzwords? How easily can machine learning systems be spoofed? And what's the ground truth about machine learning and security?
The panelists' answers were nuanced and context-dependent. Ali advised CISOs to figure out whether the problem the vendor solves maps to your problem, and maps to your data.
Chandrasekar offered a rule of thumb. He thought AI generally misapplied where people are already doing a job. It finds its best application where there are more data than people can handle, and where those data aren't being used. "The vast bulk of video isn't being looked at, nor are the vast majority of packets. That's where AI and machine learning can play."
Griffin thought that AI will serve security well in threat hunting. He also thought it will inevitably play a role in hacking back—an ambivalent role, given the dubious nature of hacking back, but a role nonetheless.
This prompted a final question about "killbots." How close is the US Department of Defense to letting AI make kill/no-kill decisions? Flagg took the question and dismissed it as misleading. (She deplored, in an aside, an implicit characterization of her as "the killbot lady" that appeared last year in Maxim, of all places.) She thought that AI won't be delegated that kind of decision. (In this she agrees with other observations from Government and industry such as this one from a General Dynamics executive, who said that automation in land combat systems stops with the decision to fire a weapon.) Rather, AI will increase the precision with which weapons are used. "It's a false question—we won't release killbots. Instead, AI will increase precision, and reduce collateral damage."
A historical note
This may be skipped by anyone who's not interested in high scholasticism or eccentric Franciscan tertiaries, but we want to give due credit to the panel moderator's well-informed historical perspective. Any panel whose facilitator begins with an allusion to Ramon Llull as the intellectual father of artificial intelligence pretty much has us at hello, or, more precisely, at "The first proposal for artificial intelligence was described in A.D. 1305." So thanks to Dr. Brothers for his discussion of Ramon Llull (1232-1316, theologian, mystic, and Catalan troubadour) as the first thinker known to have devoted serious attention to theory and speculation about artificial intelligence.
Brothers briefly described Llull's "paper machine," essentially several rotating disks that Llull believed could automatically generate all true propositions through systematic manipulation and juxtaposition of a relatively small number of metaphysical principles. You may have seen children's toys, Llullian in spirit, in which you rotate disks to produce a variety of different faces. That's a very low-tech version of the device described in the Ars Magna. It's worth noting such precursors as Llull's great art, especially in a field where so many of our decisions and perceptions are dominated by pictures of what must be, might be (or really isn't at all).
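Llull's combinatorial mechanism is simple enough to sketch in a few lines of code. The toy program below is our own loose illustration, not anything presented on the panel: it pairs terms the way Llull's concentric disks would by rotation. The list of "absolute principles" follows the Ars Magna; the "X is Y" proposition template is an illustrative simplification.

```python
# A toy sketch of Llull's rotating-disk "paper machine": each disk carries
# a small set of terms, and rotating the disks brings every term into
# juxtaposition with every other, mechanically enumerating propositions.
from itertools import permutations

# Nine of Llull's "absolute principles" (divine attributes) from the Ars Magna.
PRINCIPLES = [
    "goodness", "greatness", "eternity", "power", "wisdom",
    "will", "virtue", "truth", "glory",
]

def generate_propositions(principles):
    """Yield every ordered pairing of distinct principles, as the
    rotation of two concentric disks would produce."""
    for a, b in permutations(principles, 2):
        yield f"{a} is {b}"

props = list(generate_propositions(PRINCIPLES))
print(len(props))   # 9 * 8 = 72 ordered pairings
print(props[0])     # "goodness is greatness"
```

The point of the sketch is the one Brothers made: the machine is purely combinatorial, generating candidate propositions exhaustively rather than judging them, which is why Llull reads as a precursor to, rather than an inventor of, machine intelligence.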