President Biden's Executive Order on artificial intelligence addresses a broad range of issues.
The US Executive Order on artificial intelligence is out.
US President Biden this morning issued an executive order (EO) on artificial intelligence (AI).
The EO is both protective and promotional.
Initially available to the public in the form of a White House Fact Sheet, the EO "establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more." The closing "and more" is seriously intended. The EO is complex and far-ranging, touching on both the risks and opportunities the family of emerging technologies presents.
Cybersecurity aspects of the EO.
Many of the provisions of the EO have little to do directly with cybersecurity proper, but those that do include:
- "New Standards for AI Safety and Security." The EO will apply the Defense Production Act to require that development and subsequent training of "any foundation model that poses a serious risk to national security, national economic security, or national public health and safety" must be reported to the federal government. Such reporting must include "the results of all red-team safety tests." These measures are intended to ensure AI systems are safe, secure, and trustworthy before companies make them public. The National Institute of Standards and Technology (NIST) will establish "rigorous standards for extensive red-team testing to ensure safety before public release." The Department of Homeland Security (DHS) will establish an AI Safety and Security Board to ensure compliance. DHS will also work with the Department of Energy to address AI-based threats to critical infrastructure. The Department of Commerce will develop guidance for content authentication (the EO specifically mentions "watermarking") to ensure that AI-generated content is clearly recognizable as such. The National Security Council will lead preparation of a National Security Memorandum to "ensure that the United States military and intelligence community use AI safely, ethically, and effectively in their missions, and will direct actions to counter adversaries’ military use of AI." Some of the aspirations in this section are positive rather than preventive. The EO promises a cybersecurity program to develop AI tools that can find and fix software vulnerabilities.
- "Protecting Americans’ Privacy." The EO promises a range of measures designed to develop technologies that can protect individuals' privacy. New cryptographic tools are specifically mentioned. Here too the provisions are both positive and preventive, seeking not only to protect data from AI-enabled snooping, but to use AI in ways that would enhance privacy.
- "Ensuring Responsible and Effective Government Use of AI." The EO promises "guidance for agencies’ use of AI, including clear standards to protect rights and safety, improve AI procurement, and strengthen AI deployment."
A complex EO addressing many distinct issues.
The EO in general has the sort of broad-ranging complexity one sees in a Presidential State of the Union address, with distinct constituencies in mind. Other sections of the EO focus on ensuring competition, preserving and creating jobs, avoiding certain civil rights risks (particularly in employment and housing), and supporting AI research and development. The White House Fact Sheet emphasizes the degree to which international consultation shaped the EO, and the list of partners is long and instructive: Australia, Brazil, Canada, Chile, the European Union, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK. (Notably absent are China and Russia.) The UK is hosting a much-anticipated AI summit this week, and the United Nations has announced the formation of an AI governance advisory committee.
Some industry experts see the EO as proactive.
As Chris Wysopal, CTO and co-founder of Veracode, said in comments emailed this morning, AI-delivered threats have so far not shown up, at least not in a big way. “We haven’t seen the malicious, widespread use of AI yet – and hopefully we won’t – but we all know theoretically that it’s possible. People are hesitant, and even scared, of a future where this technology is completely unregulated. That is why the government acted to rein it in with the upcoming Executive Order," Wysopal wrote. "This proactive approach is radically different from how the government has regulated new technologies in the past, and for good reason. The rapid evolution of AI makes it hard to predict the outcome of the cyber arms race between defenders and attackers. The same “wait and see” strategy that the government took to regulate the internet and social media is not going to work here."
And Wysopal sees, with the EO, both challenge and opportunity. "In an era of rapid AI advancement, transparency of training data, visibility of how to mitigate the security risk from AI attacks, and understanding of the reliability and safety of the AI output are necessary regulatory measures to ensure the responsible development and deployment of AI technologies. We should embrace this technology, but we need to do it safely. It’s a challenging problem, and just passing regulation is not going to solve it, but today is a start. As more private companies continue to launch AI products, the feedback they receive from their customers is critical to shaping future AI regulations that foster innovation and address societal concerns. Collaboration between the tech industry and the government is key for instilling a secure space for innovation and safety to thrive.”
(Added, 1:00 PM ET, October 30th, 2023.) Wysopal sent some additional comments in late this morning. He's struck by how extensive the EO turned out to be, especially given how immature the technology is. “I can’t remember the government developing such a sweeping EO on the safety of any emerging technology in the past. There have been EOs concerning nuclear materials and technology and cryptography but they have been focused on export controls. The AI EO seeks to make domestic use of AI safe, secure and interestingly, trustworthy. The order seeks to prevent AI being developed for fraud, privacy violations and to be free from bias. That is a tall order."
Wysopal particularly approved of the EO's promise to foster the development of AI for cybersecurity. "One of the benefits the EO pursues is one of the biggest challenges of our time which is cybersecurity by tackling the risks due to insecure software. By incentivizing the development of AI that can find and fix vulnerabilities in software the EO seeks one of the huge benefits of generative AI which is to understand and write software code."
And Wysopal offered some inside perspective on the Administration's engagement with the private sector in the formulation of this aspect of the EO. "I met with the ONCD back in June to discuss the challenges and benefits to cybersecurity that AI would bring. I stressed the benefit of shifting left the benefits of AI all the way to the software creation process where vulnerabilities can be eliminated at their source. Also, this is not just finding and fixing new vulnerabilities in current software development but going back and tackling the security debt of legacy products that were built in the ‘90s and 2000s which didn’t get practically any security built into the development process. I also stressed the importance of prioritizing automating fixing over more finding because developers are already not fixing what can be found with available tools. This is due to time and resources but also training to be able to understand how to implement a fix. Automated fixing solves all of these problems at once. With far less vulnerabilities making it out of the development process into production code, the cost to attackers skyrockets as there are less points of vulnerability and security teams can more easily defend a smaller attack surface.”
On the other hand, generative AI is already out and about.
Maybe not in a big way, but the effects of AI are already beginning to be felt, especially in social engineering. John Gunn, CEO of Token, cautions, "The aim is noble and the need is certain, but the implementation will be challenging considering that Generative AI technology is already being used extensively by hackers and enemy states to attack US companies with phishing emails that are nearly impossible to detect. Most AI technologies that deliver benefits can also be used for harm, so almost every company developing AI solutions needs to make the required disclosure today."
A focus on responsible use, by both government and the private sector.
Andre Durand, Founder and CEO, Ping Identity, commented on the EO's emphasis on responsible use of the new technology. “Today’s executive order (EO) represents the first White House driven policy tied to AI regulation, and is a substantial step towards establishing more guidelines around the responsible use and development of AI. While the impact of AI on society has been profound for decades and will continue to persist, the EO aims to ensure a more secure and conscientious AI landscape. Safeguarding against its misuse and enforcing balanced regulation, means that we can embrace the benefits and future of trustworthy AI," Durand wrote, adding, "The EO also acknowledges that AI heavily relies on a constant flow of data, including user and device information, some of which may be sent to entities outside the U.S., making the need for stronger requirements around identity verification even more necessary. As criminals find novel ways to use AI, we can fight fire with fire and use AI - in responsible ways - to thwart their efforts. Organizations who adopt AI-driven solutions have the power to detect anomalies, enemy bots and prevent fraud at massive scale. Identity verification will also play a major role in stopping attacks going forward, so stronger requirements around identity proofing, authentication and federation will be necessary. As we continue to see further regulations emerge, the private sector must also take part in the effort and collaborate with public stakeholders to achieve more responsible AI worldwide.”
But some perceive a coming "regulatory cage."
That's the viewpoint the R Street Institute published this morning. Adam Thierer, a senior fellow on the R Street Institute's technology and innovation team and an AI expert, sees an incipient "regulatory cage." He summarized five key points from his assessment:
- "While some will appreciate the whole-of-government approach to AI required by the order, if taken too far, unilateral and heavy-handed administrative meddling in AI markets could undermine America’s global competitiveness and even the nation’s geopolitical security. AI is a critical new technology with the potential to fundamentally expand productivity and economic growth, with benefits accruing across many sectors and for all consumers. AI has particularly important implications for advancing public health. AI and computational science also have national security ramifications, which is why a strong and secure domestic technology base is essential to countering challenges or threats from China and other nations. Excessive preemptive regulation of AI systems could impede the growth of these technologies or limit their potential in various ways."
- "The new executive order highlights how the administration is adopting an everything-and-the-kitchen-sink approach to AI policy that is, at once, extremely ambitious and potentially over-zealous. The implementation details on all the matters here are mostly left to the various federal agencies to work out, and it remains unclear how far they can stretch their statutory authority to enforce many of these stipulations. Even so, taken together with other recent administration statements, the order represents a potential sea change in the nation’s approach to digital technology markets as federal policymakers appear ready to shun the open innovation model that made American firms global leaders in almost all computing and digital technology sectors."
- "There are some positive and much-needed elements to the EO, however, including its call 'to expand the ability of highly skilled immigrants and nonimmigrants with expertise in critical areas to study, stay, and work in the United States by modernizing and streamlining visa criteria, interviews, and reviews.' For some time, there has been a pressing need to expand efforts to retain skilled immigrant workers, with many technology companies and experts worried about losing top-notch talent to other nations. But most of the order focuses on broader and extremely amorphous calls for expanded government oversight across many other issues and agencies, raising the risk of a 'death by a thousand cuts' scenario for AI policy in the US."
- "Of greater concern is the executive order’s green light for the Federal Trade Commission (FTC) to expand its focus on AI policy. While the FTC does possess broad powers to police unfair and deceptive practices for all markets, the danger of preemptive overreach exists with the EO’s call for the FTC to exercise greater regulatory authority over the AI ecosystem in particular."
- "With the administration’s recent actions, one can’t help but worry that the Biden administration is looking to follow in the E.U.’s footsteps on AI policy with more comprehensive controls on computation and meddling in digital tech markets. There is still time to pursue a more enlightened path. To balance innovation and safety, AI governance must be focused on flexible, collaborative, iterative, bottom-up governance solutions through risk-based policies that are focused on system outcomes, not on system inputs or design."
Other aspects of the EO, looking at AI's potential in cyber-adjacent areas.
(Added, 1:15 PM ET, October 30th 2023.) Jake Williams, currently a faculty member at IANS Research, a Boston-based cybersecurity research and advisory firm, commented on the relevance of the EO to privacy, healthcare, and biosafety. “While it is significant that the Biden AI Executive Order (EO) regulates foundation models, most organizations won't be training foundation models. This provision is meant to protect society at large and will have minimal direct impact to most organizations," he wrote, adding, "The EO places emphasis on detection of AI generated content and creating measures to ensure the authenticity of content. While this will likely appease many in government who are profoundly concerned about deepfake content, as a practical matter, generation technologies will always outpace those used for detection. Furthermore, many AI detection systems would require levels of privacy intrusion that most would find unacceptable."
He thinks the order's specific mention of regulating AI as it might be applied to biosynthesis is important. "The risk of using generative AI for biological material synthesis is very real. Early ChatGPT boosters were quick to note the possibility of using the tool for "brainstorming" new drug compounds—as if this could replace pharmaceutical researchers (or imply that they weren't already using more specialized AI tools). The impact of using generative AI for synthesizing new biological mutations, without any understanding of the impacts, is a real risk and it's great to see federal funding being tied to the newly proposed AI safety standards."
And research into privacy preservation is likely to be among the most significant policy steps the EO may foster. "Perhaps the most significant contribution of the EO is dedicating funding for research into privacy preserving technologies with AI. The emphasis on privacy and civil rights in AI use permeates the EO. At a societal level, the largest near-term risk of AI technologies is how they are used and what tasks they are entrusted with. The Biden EO makes it clear: privacy, equity, and civil rights in AI will be regulated. In the startup world of "move fast and break things", where technology often outpaces regulation, this EO sends a clear message on the areas startups should expect more regulation in the AI space.”
Industry comment on the dual aspect of the EO: opportunities and risks.
(Added, 2:15 PM ET, October 30th, 2023.) Shreyans Mehta, Founder & CTO, Cequence Security, offered an appraisal of what the EO could mean for both preventing harm and achieving benefits from AI:
"It is evident to me that while AI holds immense promise for augmenting our national defenses and economic prowess, it's equally a double-edged sword. We can use AI to enhance our productivity but the same holds for adversaries as well. AI is a 'dual-use technology,' with the potential to usher humanity forward or, if mismanaged, regress our advancements or even push us towards potential extinction.
"APIs, which drive the integrations between systems, software, and data points, are pivotal in realizing the potential of AI in a secure protected way. Their role is even more pronounced when we think of AI's application in cyber defenses. AI can sift through massive amounts of data, identify vulnerabilities, detect phishing attempts, and even discern patterns that may elude human analysts. Yet, on the flip side, in the hands of adversaries, AI can potentiate cyber threats, automate intrusion attempts, and swiftly adapt to evolving security measures.
"The executive order's emphasis on leveraging the government's purchasing power to shape AI practices in the industry resonates deeply with the cybersecurity community. It's not merely about the acquisition of AI tools; it's the responsibility and accountability of secure integration, especially when facilitated through APIs. Secure data sharing across platforms becomes the linchpin in ensuring AI-driven cyber defenses are robust and resilient.
"The order's inclination to put constraints on China's AI development, primarily through U.S. cloud providers, is a prudent move to ensure that technological advancements are not weaponized against democratic values. Similarly, the drive to streamline AI talent recruitment will be pivotal in sustaining U.S. technological leadership.
"We welcome the government's involvement in this. We are in the early phases of AI innovation and it is important to bring in Government regulations for the responsible development of this technology. Unregulated development can lead to a significant impact on the security and privacy of the public, sometimes even without their knowledge. Also, AI hallucinations have the potential to spread mass disinformation. Government involvement will put guardrails around accountability and transparency of the generated content."