AI can deliver security, but it needs securing itself.

A panel at the 10th annual Billington CyberSecurity Summit on September 4th, 2019, pointed out that there are two sides to artificial intelligence in cybersecurity: AI's use in cybersecurity, and the cybersecurity of AI systems themselves.

Jack Shanahan (Director of the US Department of Defense Joint Artificial Intelligence Center) described a challenge the Government faces with artificial intelligence and data. The Government has collected data since the earliest days of the republic (in, to take the most obvious example, the Constitutionally mandated census taken every ten years). But that collection obviously didn't assume the data would be used with artificial intelligence. Commercial businesses like Amazon, Google, and Facebook aren't in this position: they don't have two centuries of legacy collection to reconsider.

Dean Souleles (Chief Technology Advisor to the US Principal Deputy Director of National Intelligence) noted that a major problem with artificial intelligence is that we don't really know what "normal" is, and without some such baseline, it's unclear how we might detect anomalous behavior. Lynne E. Parker (Assistant Director of Artificial Intelligence, White House Office of Science and Technology Policy) raised data integrity as a problem that grows sharper with the deployment of AI. Data poisoning attacks are a very real threat, and ensuring that data are trustworthy is a challenge.

The sheer historical novelty of these challenges makes them particularly difficult to address. Weighing in from the private sector, Swami Sivasubramanian (Vice President, Amazon Web Services) compared the present stage of machine learning's development to that of the internet: "If the internet is still in Day 1 after 30 years, machine learning just awoke and hasn't yet had a cup of coffee."