The US Executive Order and international cooperation on AI standards.
The Executive Order argues that global challenges require a global response.
The White House Fact Sheet emphasizes the degree to which international consultation shaped the EO, and the list of partners is long and instructive: Australia, Brazil, Canada, Chile, the European Union, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK. (Notably absent are China and Russia.) The UK is hosting a much-anticipated AI summit this week, and the United Nations has announced the formation of an AI governance advisory committee.
The EO frames the challenge and promise of AI as a global one.
The White House Fact Sheet describes the international dimensions of the challenge. “AI’s challenges and opportunities are global. The Biden-Harris Administration will continue working with other nations to support safe, secure, and trustworthy deployment and use of AI worldwide. To that end, the President directs the following actions:
- “Expand bilateral, multilateral, and multistakeholder engagements to collaborate on AI. The State Department, in collaboration with the Commerce Department, will lead an effort to establish robust international frameworks for harnessing AI’s benefits and managing its risks and ensuring safety. In addition, this week, Vice President Harris will speak at the UK Summit on AI Safety, hosted by Prime Minister Rishi Sunak.
- “Accelerate development and implementation of vital AI standards with international partners and in standards organizations, ensuring that the technology is safe, secure, trustworthy, and interoperable.
- “Promote the safe, responsible, and rights-affirming development and deployment of AI abroad to solve global challenges, such as advancing sustainable development and mitigating dangers to critical infrastructure.”
Organizations that operate globally must think globally.
Corporations that work internationally must prepare for AI regulation wherever they operate. Wade Ellery, Field Chief Technology Officer of Radiant Logic, argued:
“This is a major building block for the United States as organizations who operate on both a national and global scale look for guidance and trust. AI has huge potential to help find and fix vulnerabilities which are already being exploited by malicious actors on an almost constant basis. Consider the routine process of User Access Reviews within an organization’s network, a task which is not only fundamental, but must be repeated for it to be effective. If this task is not done properly, due to a lack of automation or a heavy reliance on manual intervention, it could lead to an organization’s sensitive data not only being exposed but the organization falling behind on strict regulatory requirements.”
Striking a balance between innovation and transparency.
Hitesh Sheth, President and CEO of Vectra AI, cautioned that governments should take care that regulatory zeal not entangle and impede innovation. “As the U.S. government works with international partners to implement AI standards around the world, it will be important for these regulations to strike a balance between advocating for transparency and promoting continued innovation - rather than simply creating artificial guardrails. There’s no doubt that AI advancements and adoption have reached a state where regulation is required – however, governments need to be cognizant of not halting the groundbreaking innovation that’s taking place that can transform how we live for the better.”
Regulate by all means, but with care and clarity.
Anurag Gurtu, CPO at StrikeReady, sees us, collectively, at an inflection point with respect to AI. “As President Biden prepares to leverage emergency powers for AI risk mitigation,” Gurtu wrote, “it's a clear signal of the critical juncture at which we find ourselves in the evolution of AI technology. The administration’s decision reflects a growing awareness of the transformative impact AI has on every sector, and the need for robust frameworks that govern its ethical use and development.”
The task isn’t simply the preemption of harm, Gurtu argued. “This initiative isn’t just about preemptive measures against potential misuse; it's a foundational move towards establishing a global standard for AI that aligns with our values of safety, security, and trustworthiness. It’s an acknowledgment that while AI presents unparalleled opportunities for advancement, it also brings challenges that must be addressed to protect societal welfare and national interests.
Inevitably this will mean more regulation, and it’s important that such regulation be developed with clarity and forethought. “For businesses and developers, this move will likely mean a more stringent regulatory environment, but also a clearer direction for innovation within safe and secure boundaries. It's time for all stakeholders to engage in dialogue and contribute to a balanced approach that fosters innovation while safeguarding against the risks that have kept policymakers and citizens alike vigilant.”
The importance of achieving and maintaining a global perspective.
“This executive order is a monumental moment for the safe, secure and ethical development and use of AI,” Eduardo Azanza, CEO of Veridas, wrote. “With Europe currently working on the EU AI Act, the US is looking to join the developing global precedent being established, which will determine how countries and organizations should approach AI. We will surely begin to see a cascading trend of similar legal actions across the globe.”
Azanza approved of the global perspective implied in the EO. “The White House has taken a global perspective necessary for implementing regulations that account for risks and benefits, security, privacy, innovation and non-discrimination. AI has a realm of opportunities, but like any new technology, it also presents risks. When harnessed responsibly, the use of AI can help address global issues and make the world more prosperous, innovative, productive and secure. However, when used irresponsibly, it can lead to societal harm, displace workers, stifle competition and pose risks to national security. It is paramount that we strike a balance between reaping the benefits of AI and mitigating its potential downsides.”