The New Cyberattack Surface: Artificial Intelligence - Know your threat
By Malek Ben Salem, Accenture
Aug 3, 2020

An introduction to this article appeared in the monthly Creating Connections newsletter put together by the women of The CyberWire. This is a guest-written article. The views and opinions expressed in it are those of the author, not necessarily those of the CyberWire, Inc.


Organizations are increasingly using AI/ML in autonomous systems. But AI deployed without security and safety safeguards can have nefarious consequences, eroding customers’ trust in an organization and potentially impacting its future business performance. Adversarial AI can cause machine learning models to misinterpret inputs into the system and behave in a way that’s favorable to the attacker.

To produce this unexpected behavior, attackers create “adversarial examples”: inputs that closely resemble normal inputs to the model but are meticulously crafted to cause it to produce the wrong output.

Attackers typically create these adversarial examples by building attack routines that repeatedly make minute changes to the model’s inputs. Eventually those changes stack up, and the model makes an inaccurate prediction on what still appears to be a normal input.
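As a concrete illustration, the sketch below crafts such an example against a simple logistic regression model, nudging the input a little at a time until the prediction flips. It is a minimal sketch using NumPy and scikit-learn; the dataset, step size, and perturbation budget are all illustrative assumptions, not taken from any real attack tooling.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a simple stand-in model to attack.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = LogisticRegression().fit(X, y)
w, b = model.coef_[0], model.intercept_[0]

def craft_adversarial(x, label, step=0.01, epsilon=0.3, iters=50):
    """Repeatedly nudge the input in the direction that increases the model's
    loss, keeping the total change inside an epsilon-ball so the result
    still resembles the original input."""
    x_adv = x.copy()
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))   # model's predicted probability
        grad = (p - label) * w                        # gradient of log-loss w.r.t. the input
        x_adv += step * np.sign(grad)                 # one minute change
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)
        if model.predict([x_adv])[0] != label:        # stop once the prediction flips
            break
    return x_adv

x0, y0 = X[0], y[0]
x_adv = craft_adversarial(x0, y0)
print("original:", model.predict([x0])[0], "adversarial:", model.predict([x_adv])[0])
```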

What makes adversarial AI such a potent threat? In large part, it’s that if an adversary can find a behavior in a model that is unknown to its developers, they can exploit it. There’s also the risk of “poisoning attacks,” in which the machine learning model itself is manipulated by poisoning the data used to train it.
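To make the poisoning idea concrete, here is a minimal sketch in which an attacker who can tamper with training labels flips a fraction of one class before the model is fit. The dataset, model choice, and 30% flip rate are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = X[:800], X[800:], y[:800], y[800:]

clean_model = LogisticRegression().fit(X_train, y_train)

# The attacker flips 30% of class-1 training labels to class 0,
# biasing the learned boundary toward the outcome they prefer.
rng = np.random.default_rng(1)
ones = np.where(y_train == 1)[0]
flipped = rng.choice(ones, size=int(0.3 * len(ones)), replace=False)
poisoned = y_train.copy()
poisoned[flipped] = 0
poisoned_model = LogisticRegression().fit(X_train, poisoned)

for name, m in [("clean", clean_model), ("poisoned", poisoned_model)]:
    recall = (m.predict(X_test)[y_test == 1] == 1).mean()  # class-1 detection rate
    print(f"{name} model: accuracy={m.score(X_test, y_test):.2f}, "
          f"class-1 recall={recall:.2f}")
```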

Secure your AI models – time to get started.

While AI attack surfaces are only just emerging, business leaders’ security strategies should account for adversarial AI, with an emphasis on engineering resilient modeling structures and strengthening critical models against attempts to introduce adversarial examples. The most immediate steps they need to take include:

Step 1 – Conducting an inventory to determine which business processes leverage AI, and where systems operate as black boxes.

Step 2 – Gathering information on the exposure and criticality of each AI model discovered in Step 1 by asking several critical questions, including:

  • Does it support business-critical operations?
  • How opaque/complex is the decision-making for this process?

Step 3 – Prioritizing highly critical and highly exposed models using the information acquired in Step 2, and creating a plan for strengthening the models that support critical processes and are at high risk of attack (a simple scoring sketch follows).
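One lightweight way to operationalize Step 3 is to turn the answers from Step 2 into a numeric risk score and rank the inventory by it. The sketch below is illustrative only; the fields, scale, and weights are assumptions, not taken from the Accenture report.

```python
from dataclasses import dataclass

@dataclass
class AIModelRecord:
    name: str
    business_critical: bool   # supports business-critical operations?
    opacity: int              # 1 (interpretable) .. 5 (black box)
    exposure: int             # 1 (internal only) .. 5 (public-facing API)

def risk_score(m: AIModelRecord) -> int:
    """Higher scores mean strengthen first; weights are illustrative."""
    return (10 if m.business_critical else 0) + 2 * m.exposure + m.opacity

inventory = [
    AIModelRecord("fraud-scoring", True, 4, 5),
    AIModelRecord("internal-search-ranker", False, 2, 1),
    AIModelRecord("loan-approval", True, 5, 3),
]

for m in sorted(inventory, key=risk_score, reverse=True):
    print(f"{m.name}: score {risk_score(m)}")
```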

Create robust, secure AI.

Business leaders need to combine multiple approaches to ensure robust, secure, and safe AI. Our research reveals four essential ones:

Rate limiting: Limiting the rate at which individuals can submit inputs to a system increases an adversary’s effort. That deters adversarial attackers, who must interact with the machine learning model repeatedly in order to learn more about it.
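As a sketch of the idea, the token-bucket limiter below caps how fast any caller can query a model endpoint. The `predict` function is a hypothetical placeholder for a real inference call, and the rate and burst values are illustrative.

```python
import time

def predict(x):
    return 0  # placeholder for a real model inference call

class TokenBucket:
    """Allow a steady trickle of queries plus a small burst."""
    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed, up to the burst capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate_per_sec=1.0, capacity=5)  # 1 query/sec, burst of 5

def rate_limited_predict(x):
    if not bucket.allow():
        raise RuntimeError("query budget exceeded")
    return predict(x)

for i in range(8):
    try:
        rate_limited_predict(i)
        print(f"query {i}: served")
    except RuntimeError:
        print(f"query {i}: rejected")  # rapid probing gets cut off
```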

Input validation: By scrutinizing what is fed into machine learning models, and by modifying and simplifying inputs where possible, cyber defenders can “break” an adversary’s ability to fool a model.
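A minimal sketch of that simplification, in the spirit of the published “feature squeezing” defense: clip each input to its expected range and quantize it to a coarse grid, so that minute adversarial perturbations usually collapse to the same value as the clean input. The feature count, range, and grid resolution are illustrative assumptions.

```python
import numpy as np

def validate_and_squeeze(x, low=0.0, high=1.0, levels=16):
    x = np.asarray(x, dtype=float)
    if x.shape != (20,):                        # reject malformed inputs outright
        raise ValueError(f"expected 20 features, got shape {x.shape}")
    x = np.clip(x, low, high)                   # enforce the expected value range
    step = (high - low) / (levels - 1)
    return np.round((x - low) / step) * step + low  # quantize to a coarse grid

x_base = np.random.rand(20)
x_perturbed = x_base + 0.003 * np.sign(np.random.randn(20))  # tiny perturbation

# Small perturbations usually round back to the same grid point.
print("same after squeezing:",
      np.allclose(validate_and_squeeze(x_base), validate_and_squeeze(x_perturbed)))
```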

Robust model structuring: The way machine learning models are structured can provide some natural resistance to adversarial examples: complex model structures tend to be more susceptible to adversarial AI, while simpler structures tend to be more robust.
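A simple way to see why, for the linear case: if f(x) = w·x + b, an attacker who can move each feature by at most ε can shift the output by at most ε·‖w‖₁, so shrinking the weights (here via stronger L2 regularization in scikit-learn) directly shrinks the attacker’s leverage. The dataset and regularization values below are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

for C in (100.0, 1.0, 0.01):                 # smaller C = stronger regularization
    model = LogisticRegression(C=C, max_iter=1000).fit(X, y)
    leverage = 0.1 * np.abs(model.coef_).sum()  # worst-case logit shift at eps = 0.1
    print(f"C={C}: training accuracy={model.score(X, y):.2f}, "
          f"attacker's max logit shift={leverage:.2f}")
```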

Adversarial training: If enough adversarial examples are inserted into the data during the model’s training phase, the machine learning algorithm learns to interpret them correctly and becomes robust against adversarial attacks that use similar examples during the inference phase.
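The sketch below applies that idea to the same kind of linear model used earlier: each round, it crafts one-step gradient-sign perturbations of the training data against the current model and folds them back into the training set. The round count, perturbation budget, and model choice are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

epsilon = 0.3
for _ in range(5):                                # a few augmentation rounds
    w, b = model.coef_[0], model.intercept_[0]
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = (p - y)[:, None] * w                   # per-example input gradient
    X_adv = X + epsilon * np.sign(grad)           # one-step adversarial copies
    X_aug = np.vstack([X, X_adv])                 # retrain on clean + adversarial
    y_aug = np.concatenate([y, y])
    model = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)

# Evaluate against fresh one-step adversarial examples of the final model.
w, b = model.coef_[0], model.intercept_[0]
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
X_test_adv = X + epsilon * np.sign((p - y)[:, None] * w)
print("accuracy on adversarial inputs:", model.score(X_test_adv, y))
```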

To learn more about adversarial and trustworthy AI, and how to protect your AI attack surface, please read the full Accenture Labs report.

About the author:

Malek Ben Salem is a market-focused technology leader, driving strategic and innovative Security and AI capabilities for clients. She leads Security R&D for Accenture in North America. Outside of her work, she focuses on developing ethically aligned design standards for IEEE’s Global Initiative on Ethics of Autonomous and Intelligent Systems.