Industry reacts to President Biden’s Executive Order on Artificial Intelligence.
Oct 31, 2023

Industry has reacted quickly to President Biden's Executive Order on artificial intelligence.

Overall industry reaction to the US Executive Order (EO) on artificial intelligence (AI) has tended to be positive, but some commentators expressed reservations, and a few voiced outright skepticism.

Approval of the Executive Order.

Jeff Williams, co-founder and CTO at Contrast Security, is among those who expressed approval. He’s pleased to see the government acting proactively. “I’m impressed that the White House has stepped in relatively quickly to address AI threats. Historically, the reaction from government has been too weak and years too late to make a difference.”

Ori Bendet, Checkmarx VP of Product Management, wrote, "This executive order is definitely a great start to provide everyone a perspective on some of the risks. It touches all the right domains: privacy, safety, security, while also addressing the bigger picture. Historically, any major change in architecture, technology, and tooling has introduced new vulnerabilities and new threats from malicious actors. AI is no different."

Arti Raman, CEO and Founder of Portal26, sees the EO as timely, coming as it does at an inflection point of both opportunity and risk. “The New AI Executive Order from President Biden is timely as AI usage continues to boom, improving accessibility, versatility, and productivity. And with this boom comes increased risk.”

Understanding the EO in the light of organizational responsibilities.

Raman divides the risk broadly into model risk and usage risk. “Model risk refers to the risk of errors, biases, insufficient data, harmful output, and loss of regulatory compliance. Usage risk refers to the risk of inappropriate usage, loss of data, loss of privacy, loss of intellectual property, unmanaged productivity, and overall loss of visibility. As companies look to continue investing in AI, they must also invest in the tools needed to build responsible and competitive AI programs that meet the new standards for AI safety and security while preventing both model and usage risk. This investment will allow businesses to promote innovation through AI while keeping employees and customers safe, protecting their privacy while still letting them benefit from the technology, and keeping the company in compliance with the executive order.”

Organizations should look to their users, thinks Stuart Wells, CTO of Jumio. “President Biden's executive order on AI is a timely and critical step,” Wells writes, “as AI-enabled fraud becomes increasingly harder to detect. This poses a serious threat to individuals and organizations alike, as fraudsters can use AI to deceive people into revealing sensitive information or taking actions that could harm them financially or otherwise.”

And he sees a proper understanding of AI’s potential as framing an imperative for organizations. “In light of this growing threat, organizations must elevate the protection of their users. This can be accomplished by developing and implementing standards and best practices for detecting AI-generated content and authenticating official content and user identities. That can be done through tactics such as deploying biometrics-based authentication methods, including fingerprint or facial recognition, and conducting continuous content authenticity checks.” He concludes with a call to action. “Organizations must act now to protect their users from increasingly sophisticated AI-enabled fraud and deception methods. Enhancing identity verification tactics is essential to mitigate this risk.”

John Stringer, Head of Product at Next DLP, sees the EO as a foundation on which the public and private sectors can find common ground. “CISOs are currently grappling with the proliferation of generative AI tools, worrying about how best to manage and control usage and the risk of data use and loss. The Biden Administration’s latest executive order is the first step the U.S. government has taken to put forward a set of principles from which both public and private sector organizations can work.

“AI has massive potential, but also massive risk for the cybersecurity industry. On the one hand, AI gives security teams a productivity lift by analyzing and reporting high-risk activity quickly and at scale; a key business need given heightened risk of insider threats and data loss. However, malicious actors are also investigating how to use AI for nefarious purposes. Ultimately, the Executive Order will encourage organizations to safely reap the benefits of AI.” 

Tilting away from risk and toward opportunity.

Hitesh Sheth, President and CEO of Vectra AI, also expressed general approval of the EO. “President Biden’s new executive order on artificial intelligence is a positive step toward more concrete regulation to curb AI’s risk and harness its benefits,” Sheth wrote. “However, it will be important for all global governments to strike the right balance of regulation and innovation. On the positive side, the White House is smart to align with existing National Institute of Standards and Technology (NIST) standards for AI red-teaming, or stress-testing the defenses and potential problems within systems. With a continually evolving threat landscape, it is essential for organizations to embrace a more holistic, proactive security paradigm, and NIST’s standards around red-teaming support this approach.”

Is there enough on IP protection in the EO?

Contrast Security's Williams thinks the Executive Order missed an opportunity to address protection of intellectual property. “I’m very surprised that there is no section here on protecting the rights of authors and creators,” he wrote. “AI is already being trained with copyrighted data taken from the Internet without permission. There is widespread concern about actors and others whose appearance and voice are being used in ways that were impossible to predict when they signed their contracts. It’s vital to a thriving economy that these creators are properly incentivized to innovate and create new works.”

So far so good, but really, it’s too early to tell.

Andrea Carcano, co-founder and CPO of OT cybersecurity leader Nozomi Networks, wrote, “Today's executive order shows a commitment to addressing the privacy, security and workforce concerns around AI – but it’s too early to tell how effective it will be until some of these measures are developed and implemented. In general, everyone can agree with the need to put structure around how and where AI is leveraged to protect our privacy and safety, while not stifling innovation or creating competitive advantages for our nation state adversaries.”

In some respects AI is too poorly understood for most people to make an informed assessment of recently enunciated US policy concerning the technology. Ira Winkler, CISO at CYE, thinks the EO in some respects addresses the AI sizzle and not the AI steak. “When I look at the EO, it looks like it addresses the hype of AI, and not a lot of the reality of the common use of AI,” Winkler writes. “At one level, it is probably good to get ahead of the sociological issues. However, AI is, and has long been, in much wider use than people realize. For example, Alexa and Siri are essentially AI. The predictive text on cell phones is AI. A great many decision support tools in business are based on AI algorithms. Likewise, there are many software tools, from cybersecurity to marketing to building maintenance, that are AI but likely not subject to the EO.”

Look beyond the hype to the realities that must be addressed.

And Winkler pointed out that, at bottom, most of what’s being called “AI” is not as novel as it’s often represented to be. “For the most part, AI is really mathematical formulas that are in common use and have been for decades. This EO could be important where lives are at stake, where there can be obvious direct and immediate impact to individuals, etc. However, there are a lot of common AIs that are not going to get the attention they might need.”

There’s also a risk of a top-down order stifling innovation. “I also have concerns that there might be a penalty for innovation,” Winkler said. “AIs that are inaccurate are typically that way not because of an inherent flaw in the system, but because of incomplete training. One of the key qualities of AI is that it is self-learning and is designed to be improved with new data. For example, some medical AIs have been accused of being racially biased. It is a sensitive area: there is an assumption that all human bodies are alike, and to claim that bodies of different races may have different issues can raise claims of racism. At the same time, people of different races are predisposed to different conditions. If an AI fails to take this into account, from a technical perspective the AI just needs to be better trained. This implies that potentially useful AIs could be killed off, based upon early results and incomplete training, before they are fully trained.”

He hopes, however, that reality will trump hype. “Hopefully though, cybersecurity concerns will be addressed not just to satisfy the basic hype, but will also address the security of the data, the computers on which the AI runs, the integrity on the incoming data, prevent manipulation of the AI model, and all of the other aspects that need to be addressed to properly secure AI systems.”

Richard Bird, Chief Security Officer, Traceable AI, characterizes the EO as more hope than method. “The recent executive order on artificial intelligence is aspirational at best. It enumerates a laundry list of digital privacy rights that the US government has already shown its inability to protect. If the US government was unable to protect those rights or even legislate national data privacy standards before AI became an issue, why would we expect these guidelines and standards to deliver better results than what we’ve seen in the OPM hack, PPP loan fraud or IRS fraudulent refund processing? 

“The Executive Order on safe, secure, and trustworthy AI assumes a level of visibility into the actions and activities within the private sector that it simply does not have. The opportunistic gold rush that is AI has already yielded questionable results in terms of our privacy and ethical dilemmas related to the use and application in everything from citizen monitoring to data security. The private sector is already showing zero interest and restraint in the responsible development of AI. This executive order will not change that.” 

Next steps for AI.

Tyler Shields, VP Product Marketing, Traceable AI, offered an evaluation and some speculation about next steps. “The Trustworthy AI EO is a massive and sweeping order intended to impart significant change and oversight in the AI space. It covers major areas of concern, including standards for safety, governing bodies, privacy implications, algorithmic impacts on civil rights and discrimination, worker displacement, monopoly and competition, and governmental use domestically and abroad. Regarding the impact on the IT and Software Industries, the most significant component is the formation of entities to ensure that AI safety assertions are adequately conducted and reported to the government appropriately. In addition to these base requirements, the order requires red teaming, AI watermarking, and the development of a cybersecurity AI program to improve network and software security. This isn't the last of what is to come. There is a specific call out in this EO for the US Government to create a National Security Memorandum to add additional requirements and specificity around security in AI. I would expect additional output soon that is more grounded in technical requirements and has designated punishments and impacts for failure to comply. This is just the first foray into guidelines and guardrails for safety and security in AI innovation.” 

And remember, AI is a tool. Attackers use it (and they won't read the EO the way you do).

It’s worth remembering that threat actors aren’t going to be deterred by an Executive Order. Dror Liwer, co-founder of Coro, wrote, “Our concern is not with corporations adopting safe and ethical AI, but rather bad actors both private and state sponsored who are not bound by any executive order. We need to prepare for the asymmetrical battle where corporations are bound by regulatory requirements, while the adversaries are using that to their advantage.”