Developing AI tools that can enhance cybersecurity.
N2K | Oct 31, 2023

The "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" envisions fostering and organizing AI to mitigate cyber threats and vulnerabilities.


Much discussion of the Executive Order (EO) on artificial intelligence (AI) has focused on the ways in which AI poses potential threats, and on how such threats might be averted before they become realities. But it also discusses the ways in which AI can make a positive contribution to security.

Using AI to enhance software and network security.

The White House Fact Sheet on the EO envisions a national program to harness artificial intelligence in the service of cybersecurity. One of the EO's goals is to "Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration’s ongoing AI Cyber Challenge. Together, these efforts will harness AI’s potentially game-changing cyber capabilities to make software and networks more secure."

Andrea Carcano, co-founder and CPO of OT cybersecurity leader Nozomi Networks, agrees with this aspect of the EO. “It's critical for governments to find ways to accelerate the ways AI can be used for good, while taking the necessary steps to deter its malicious use and negative outcomes. We know criminals don’t play by the rules, and those looking to attack our critical infrastructure are already leveraging AI technology in attempts to do harm. Therefore, it’s equally important to focus on understanding what’s needed to strengthen our defenses against AI-enabled cyberattacks. We believe AI-powered defense will continue to play a critical role in that effort.”

Avani Desai, Chief Executive Officer at Schellman, advises some first steps toward preparing for AI-enabled adversaries. “To prepare for wider adoption of AI systems in cyberattacks, such as AI-enabled fraud and deception, first, start from the basics and make sure you have a strong control environment, for instance, robust authentication measures, performing penetration testing on a frequent basis so you can identify vulnerabilities, and regular training and awareness for employees. The second step should be to perform tabletop exercises to make sure your incident response and business continuity strategies are in place, work well, and, if there are gaps, to make changes in real time. Next is to implement technology to mitigate the risk, such as multi-factor authentication.”

Fighting fire with fire, fighting AI with AI.

If AI can automate attacks at scale, and at high speed, it seems reasonable that opposing AI would offer an indispensable counter to such a threat.

Mona Ghadiri, Senior Director of Product Management at BlueVoyant, outlines the familiar offense-defense seesaw as it's likely to play out in artificial intelligence. “BlueVoyant research has found that AI is helping threat actors launch attacks faster and more efficiently. Using AI, they can put a phishing kit on autopilot and send out more messages faster. The threats remain, but the volume increases, making it harder to defend against. As we are seeing this week, fundamental changes are coming to cybersecurity due to the increasing use and promises of AI. While there is a lot of fear around how AI will enable cyber criminals, it is also making defenders stronger. AI can enable more rapid threat response while keeping costs down and helping with the shortage of talented cybersecurity professionals."

Industry isn't unaware of AI's potential, Ghadiri points out. “The most advanced cybersecurity vendors have been and will continue to use AI to support their clients and find threats faster, backed by human-led expertise. These companies will continue to be able to do more with their advanced technology, and use of AI and machine learning, to thwart cyber criminals. AI must also be closely monitored to make sure nefarious actors do not change the parameters, which could compromise security. That is why AI needs to be backed by human-led expertise to ensure it properly fills its role. As defenders use AI more, so will the bad actors. AI may enable cyber criminals to set up scams and phishing more quickly and enable those without a technical background to carry out attacks. The defenders must stay on guard for new AI-fueled attacks and this executive order is just the start to ensure implementation.”

Use AI to enhance security, but take care that it doesn't open new vulnerabilities.

Ashley Leonard, CEO of Syxsense, agrees that this aspect of the EO hasn't received the attention it merits. “From our perspective as an automated vulnerability and endpoint management software developer, one action that hasn’t been widely covered in the mainstream media is the 'advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software,'" Leonard wrote. "It will be very interesting to see how this program is implemented and whether those tools will be open source and voluntary or proprietary and government-mandated. Over the last 30 years, we’ve seen how code degrades over time. It’s why we have new vulnerabilities and bugs being disclosed every day. It takes real resources – budget, time, and staff – for even the most advanced companies to keep up with vulnerabilities and bug fixes, so it makes sense to see if we could be using AI tools to find and fix these vulnerabilities. The other side of this directive, though, is whether AI can check AI. We are already seeing broad use of generative AI as part of the software development process. GenAI has enabled everyone to become a coder – but if you use AI to code, can you use it to test as well? How do software companies ensure the security of the code that’s being developed?"

One potential risk Leonard sees is the unnoticed growth of shadow AI. “This program has the possibility of growing into a bit of a beast. Just like we saw the massive growth of ‘shadow IT,’ we will absolutely see 'shadow AI' in use across organizations. Finance teams are using AI capabilities to generate new models faster and with more accuracy. Sales and marketing teams are already using AI to streamline several processes and tactics across their programs. And neither department needs to buy an AI tool to do so; the capability is being built into the tools they are already using – all they need to do is flip the switch to turn it on. By 2027, Gartner believes that 75% of employees will be acquiring IT outside of traditional IT buying processes, and this convergence of AI capabilities into traditional IT software is going to exponentially increase the adoption of AI. So if there’s a program being created to develop AI tools to find and fix vulnerabilities, how is it going to do that across the massive set of solutions in use across an organization?”

Ori Bendet, VP of Product Management at Checkmarx, approved of the EO. "This executive order is definitely a great start to provide everyone a perspective on some of the risks. It touches all the right domains: privacy, safety, and security, while also addressing the bigger picture. Historically, any major change in architecture, technology, and tooling has introduced new vulnerabilities and new threats from malicious actors. AI is no different. Specifically, as it relates to making sure software stays secure, it’s crucial for developers and AppSec teams to understand that generated code isn’t inherently safer than open-source code. Many code generation tools rely on open-source materials, which can have their own set of vulnerabilities. When added to the developer workflow, AI introduces potential new vectors for attackers to take advantage of. This is leading to new threats, particularly in the emerging field of software supply chain security." Bendet concluded, “Clearly, there are multiple roles here for AI to play. Now it will be up to all of us to ensure that we not only understand the great value of this disruptive technology, but also the risks that come with it."

But AI might struggle with the unknown unknowns.

Jeff Williams, co-founder and CTO at Contrast Security, offered a final caution on some of AI's limitations. “There are some ways that AI can help with cybersecurity, such as making it easier to understand and query large cyber data sets. However, the White House is grasping at straws to think that AI will magically solve cybersecurity issues in the near term. For 'known' vulnerabilities and attacks, we have techniques with better assurance and capability than what AI can offer. For 'unknown' threats, AI is not a particularly good match, since we have no training data on zero-days. So why do we think that AI can help us here?”