Red-teaming and standards development in President Biden's AI Executive Order.
N2K logoOct 31, 2023

Red-teaming is expected to play an important role in AI standards development.

Red-teaming and standards development in President Biden's AI Executive Order.

The White House Fact Sheet announcing President Biden’s Executive Order (EO) on artificial intelligence (AI) contained some clear direction on standards development in general and red-teaming in particular. 

Red-teams are mentioned early in the EO.

The Fact Sheet specifies that the Administration will:

  • “Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public,” and 
  • “Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety.”

Red-teaming will play a significant role in standards development.

Contrast Security’s Jeff Williams, co-founder and CTO, is interested to see how the expected standards will emerge. “I’m encouraged to see requirements for disclosure of models and red-team safety tests to government. However, it’s not clear that the public will have access to this information,” he wrote. “I can’t wait to see the “rigorous standards” for red-team safety testing. I think this will be a very difficult challenge. How do you create tests that ensure that AI is safe? I hope that they involve industry, including the 500+ team from OWASP that have created the OWASP LLM App Top Ten Security Risks and other research.”

Michiel Prins, Co-founder and Head of Security Advisory Services at HackerOne, offered an extended consideration of how red-teaming will influence standards for, and deployment of, AI systems. 

“From a cybersecurity perspective, we’re still very much in the early days of understanding the risks and benefits of AI. Our digital world is more connected than ever, and the security risks of AI will take a collective effort from the cybersecurity community and the private and public sectors to contain. 

“We commend the Biden Administration for recognizing the role the ethical hacker community can play in these efforts, as demonstrated through the Executive Order’s endorsement of red-teaming. The AI Village at Def Con proved the value of these kinds of engagements; pressure testing AI through real-world scenarios to find vulnerabilities is the fastest way to significantly improve the security of these models.  

“Ethical hackers have also made it clear they are ready to meet this call to secure AI: 55% of hackers on the HackerOne platform predict that Gen AI tools themselves will become a major target for them in the coming years, while 62% plan to specialize in the OWASP Top Ten for LLMs. At HackerOne, we’re already working with some customers, including AI companies, to explore how red-teaming and the creativity of hackers can improve AI security and safety.  

“I’m also encouraged by the Executive Order’s emphasis on building AI tools to find and fix vulnerabilities in critical software. More than half of ethical hackers on our platform plan to use Gen AI tools to identify vulnerabilities faster, and we can only assume bad actors are doing the same. We must meet this new threat landscape with matched investment in innovation.” 

Standards development and the importance of testing.

Andrew Costis, Chapter Lead of the Adversary Research Team at AttackIQ, thinks the National Institute of Standards and Technology (NIST) is the right choice to lead AI standards development. “Expanding NIST standards to include the field of AI safety is a step in the right direction as AI evolution and adoption has been increasing much more over recent years. NIST has always been a great first step for many organizations wanting to mature their cybersecurity program. Evaluating control families using NIST is a path chosen by many. AttackIQ has mapped 18 control families using NIST 800-53 as well as mapped each to MITRE ATT&CK, enabling exercising those controls much easier and using an industry-recognized taxonomy.

Standards need to be informed by testing. “Understanding the attack surface of the AI technology is important, particularly in terms of testing known adversary behaviors,” Costis said. “AI technologies will still be susceptible to certain attacks such as data exfiltration, supply chain attacks, and data manipulation (to name a few). There already exists today a number of ways to safely emulate these known adversary behaviors in order to better plan and prepare for possible AI attacks in the future.”

A comprehensive approach to standards development.

Michael Covington, VP of Portfolio Strategy at Jamf, was struck by the scope and scale of the approach to standards the EO envisions:

"The recently signed Executive Order focused on establishing new standards for AI use is one of the most comprehensive approaches to governing this new technology that we’ve seen to date.

“As much as we may want to encourage organic and unconstrained innovation, it is imperative that some guardrails be established to ensure developers are mindful of any downstream effects, and that regulators are in place to help monitor for potential damages so they can be addressed before spiraling out of control.

“As we have seen with recent AI developments, the technology is moving at a rapid pace and has already made an impact on society, with diverse applications across industries and regions. While there have been calls for regulations to govern the development of AI technologies, most have been focused on preserving end user privacy and maintaining the accuracy and reliability of information coming from AI-based systems.

“President Biden’s executive action is broad based and takes a long-term perspective, with considerations for security and privacy, as well as equity and civil rights, consumer protections, and labor market monitoring. The intention is valid - ensuring AI is developed and used responsibly. But the execution must be balanced to avoid regulatory paralysis. Efficient and nimble regulatory processes will be needed to truly realize the benefits of comprehensive AI governance.

“I am optimistic that this holistic approach to governing the use of AI will lead to not only safer and more secure systems, but will favor those that have a more positive and sustainable impact on society as a whole."  

The EO demonstrates that AI has become a US national priority.

Randy Lariar, Practice Director of Big Data, AI and Analytics at Optiv, thinks that whatever else it accomplishes, the EO shows that AI is now a US national priority, and that appropriate standards will follow:

“The Biden administration’s Executive Order on AI safety touches on a variety of areas where the Federal Government can play an important role in setting up guardrails. Most notably to me, it increases the focus on the National Institute of Standards and Technology (NIST) to complement its existing AI Risk Management Framework by developing safety standards for AI, including "tools and tests to help ensure that AI systems are safe, secure, and trustworthy."

“This is no small task, and varying degrees of trust are appropriate considering the associated risk of the task which AI is conducting. I think that the development of these standards along with the surge in investment and resources for securing AI will be the most impactful part of today's news.

“The biggest takeaway, though, is that the EO demonstrates that AI is a national priority – not an issue limited to the big tech companies. It has a broad impact, affecting consumers, students, small businesses and many other interest groups. It’s clear the administration has consulted with experts across the public and private sector to establish a very strong plan to achieve AI safety and security, and today’s development should be seen as a step in the right direction.”