The Bletchley Declaration follows the US Executive Order on artificial intelligence.
Nov 2, 2023

A global challenge receives global attention.

US President Joe Biden on Monday signed an executive order focused on the secure use of AI technology, signaling a much more aggressive approach to AI regulation. As the White House explains, the EO aims to make the US a leader in “seizing the promise and managing the risks of artificial intelligence” and “establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.” 

AI governance is accepted as an international challenge: a view from the UN.

The US wasn't alone in moving toward governance of artificial intelligence. Last week the United Nations established an advisory body directed at the same challenge. Reuters reports that Secretary-General António Guterres explained the reason for the move. "The transformative potential of AI for good is difficult even to grasp," he said. "And without entering into a host of doomsday scenarios, it is already clear that the malicious use of AI could undermine trust in institutions, weaken social cohesion and threaten democracy itself."

And a view from Bletchley Park.

This week British Prime Minister Rishi Sunak hosted an AI Safety Summit, convening about a hundred government leaders, tech executives, and scholars. The Summit was British-led but drew broad international participation. The BBC explains that Prime Minister Sunak’s plan is to make the UK a global leader in AI safety, and the Summit reached consensus on a draft agreement, the Bletchley Declaration, which outlined two general directions for further work:

  • "[I]dentifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies."
  • "[B]uilding respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognising our approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research."

The signatories represent the world's major cyber powers, with the exception of Russia, Iran, and North Korea. The full list: Australia, Brazil, Canada, Chile, China, the European Union, France, Germany, India, Indonesia, Ireland, Israel, Italy, Japan, Kenya, the Kingdom of Saudi Arabia, the Netherlands, Nigeria, the Philippines, the Republic of Korea, Rwanda, Singapore, Spain, Switzerland, Türkiye, Ukraine, the United Arab Emirates, the United Kingdom, and the United States of America.

Implementing the US Executive Order.

AP News notes that the ambitious document offers guidelines on privacy, civil rights, consumer protections, scientific research, and worker rights. Politico reports that the guidelines prioritize the immigration to the US of highly skilled individuals with expertise in critical areas, and also call for the creation of new government offices and task forces focused on harnessing the powers of AI for use in areas like healthcare, housing, trade, and education. At the same time, President Biden directs federal agencies to set standards that ensure the data privacy and cybersecurity of AI technology, prevent discrimination, and monitor competition in the AI industry.

Furthermore, the EO invokes the emergency federal powers of the Defense Production Act (enacted during the Korean War) to require major AI companies to notify the government when developing any system that poses a “serious risk to national security, national economic security or national public health and safety.” The New York Times adds that the US president is calling for the developers of the most advanced AI products to submit test results to the government demonstrating that the technology cannot be used to manufacture biological or nuclear weapons.

The EO laid out specific tasks for Executive Departments and agencies, and the Commerce Department has already taken steps toward realizing its primary assignment under the EO. "[T]he U.S. Department of Commerce, through the National Institute of Standards and Technology (NIST), will establish the U.S. Artificial Intelligence Safety Institute (USAISI) to lead the U.S. government’s efforts on AI safety and trust, particularly for evaluating the most advanced AI models. USAISI will support the responsibilities assigned to the Department of Commerce under the historic executive order that President Biden signed earlier this week."

Reaction to the EO continues to come in.

Some members of the tech industry worry that the order oversteps in its government oversight and could hamper innovation. Trade group NetChoice says the order is “dangerous for our global standing as the leading technological innovators” and is “ripe for legal action.” But in general, the order received a positive response from tech interest groups, cybersecurity experts, and Democratic lawmakers. (Republicans, the Wall Street Journal notes, largely declined to comment.) Senate Majority Leader Chuck Schumer, a Democrat from New York, says a bipartisan group of lawmakers will meet with President Biden at the White House this week to discuss possible legislation. The guidance within the order is to be implemented over the course of ninety days to one year.

Jackie McGuire, Senior Security Strategist at Cribl, offered some reflections on the aspirations and unanswered questions surrounding the EO:

"The Biden administration issued a sweeping executive order on artificial intelligence this week, aiming to address some of the most pressing and frequently cited concerns around the increasing ease with which large amounts of data can be used for unintended purposes. This is the latest in a string of increasingly tech-literate policies issued by this administration and was far more comprehensive and ethics-focused than I anticipated. I particularly appreciate the nod toward rooting out bias in AI.

"As with many executive orders, this one raises as many questions as it attempts to answer. While many of the goals and directives concerning consumer protection, generative data labeling, ensuring the riskiest types of AI are reviewed and have guardrails, and accelerating AI training are all necessary steps in the right direction, one has to question whether the government and its agencies, in their current capacities, can enforce all of these mandates, either from a technical knowledge standpoint, or sheer manpower. The ease with which AI programs can be built means that they are proliferating at a rate that it seems difficult to keep pace with, so finding enough skilled professionals to provide oversight, review, and approval seems a daunting task.

"Additionally, my conversations concerning our national security posture with security professionals inside and outside the Federal government always center around one common issue: You can’t make long-term, strategic decisions one two-year congressional term or four-year presidential term at a time. The business of AI and technology policy making needs to be removed from the volatile, cyclical nature of our legislative processes to be effective. CISA, NIST and other federal agencies can provide some stability here, but only if given the authority to do so, independent of the current administration or legislature."

Tim Malcom-Vetter, EVP of strategy at NetSPI, is struck by the speed with which the new technology is being adopted. “There has never been faster adoption of any technology than what we’ve seen with Generative AI, ML, and LLMs over the past year. A prime example of such rapid adoption and disruption is the public letter by Satya Nadella, CEO of Microsoft, in which it was announced that all Microsoft products are or soon will be Copilot-enabled – this is just the starting point," he wrote in emailed comments. "The most recent AI Executive Order demonstrates the Biden administration wants to get ahead of this very disruptive technology for its use in the public sector and desires to protect the private sector by requiring all major technology players with widespread AI implementations to perform adversarial ML testing. The order also directs NIST to define AI testing requirements, which is critical because no one can yet say with confidence that we, as a tech industry, exhaustively know all the ways these new AI implementations can be abused.”

Sanjay Poonen, CEO and President of Cohesity, sees three important directions as the world moves forward on AI governance:

“With the White House’s executive order on AI, we’re seeing a sharp move in the right direction on creating new standards for AI safety and security, preventing cyber vulnerabilities with AI, and prioritizing privacy and the public good. However, this is just the beginning, and it will take a village of public-sector, business, and non-profit organizations to make these ambitious and vital ideas a reality. Here are a few ways we can do that, inspired by the executive order:

  • "Develop an open-source, shared standard for AI model testing: When foundational models are more open, and the journey that the data it’s based on goes through becomes clearer, they become more predictable. This will ensure that businesses and industries can build reliable, secure solutions on top of the models, ensure AI tooling stays up-to-date and accurate, and go a long way toward building accountability and earning the public’s trust in the output of generative AI. As NIST begins to build out its guidelines for AI safety, this should be a crucial component.
  • "Advise our public-sector counterparts on developing data privacy hygiene standards for AI: It’s great that the White House recognizes the importance of protecting people’s privacy and promoting privacy-enhancing technologies as AI use increases. However, the White House stopped short of providing specific mandates, directing federal agencies to offer industry-focused guidance here. There will be situations where leveraging sensitive data will be beneficial, such as in healthcare - and in these cases, we need to put safeguards in place within foundational models themselves to make sure that data is redacted in a way that protects consumers. Without a federal data privacy law, the security industry must come together and draw on its experience to help promote “privacy by design” in these models.
  • "Bridge the cyber knowledge/jobs gap without cutting out humans: The American people and corporations are already outnumbered by cyber threats, and there’s not enough human power to combat them. Generative AI has enormous potential for advanced cyber hunting, threat detection, and fixing vulnerable software because of how quickly it can sort through patterns and identify anomalies. The White House has made substantial progress by directing CISA, DHS, and other agencies to find opportunities to use AI to improve our critical infrastructure and make efforts to strengthen the AI jobs pipeline. Cyber experts and vendors should leverage this guidance to update our AI reskilling and training for the cybersecurity industry in a way that empowers employees to do more, reduce their time on busywork, and be prepared for new risks.”

Swimlane's co-founder and Chief Strategy Officer, Cody Cornell, warns of the dangers of anthropomorphized AI:

"The entire idea of AI as a human being is terrifying because now you can't trust what you see. Before, it used to be the case that people with cell phones were this reporting network that allowed us to see what was going on and allow us to make assessments of what was happening in the world based on video and audio. That’s not the case anymore, and that's terrifying as a voter and a citizen since you don't know what's true unless you physically witness it yourself. 

"So where do you draw the line? As a country, we’re known for innovation, especially Silicon Valley. This is what we do, and it drives our economy. Regulating AI needs to be a balancing act as we need to continue to be leading innovation, but we also don't want to be the reason why things go off the rails.

"Through this regulation, the government is trying to enforce rules for themselves to ultimately influence industry and have ripple effects out into the broader economy. It will be interesting to see where this goes as legislating AI is really hard since it's changing so rapidly. I think we will end up with an FDA-style process where the rules and the standards change as the technology evolves.

"This executive order will also be the catalyst for changing how we think about H1B visas. I've always thought that the US should be like a college football coach, scouting the world for the greatest talent and then doing everything to convince them to play for our team. It's exciting that there's a change in perception of how we should be doing this, and I think this will be a huge opportunity to do something positive."

Industry would be well-advised to RTWT, to give the Executive Order a close reading. That's the view of Approov VP George McGregor. "If you market a cybersecurity solution in the USA, you had better read through this Executive Order (EO) - it may affect your business! If your solution is deterministic in nature, then life will be easier, but if you are promoting the use of AI in your product, then life may well get more complicated: Not only do you need to demonstrate to customers that false positives and management overhead due to AI are not an issue, but with these new guidelines, the AI methods you employ will be under the microscope as well." He added some specific points he thinks worthy of attention:

  • "First - if you are an AI based cybersecurity vendor, you may be expected to share your test results with the government. The success or failure of a security solution, by its very nature, "poses a risk to national security".
  • "Second, attestation techniques will become critical - this is already true for mobile app code which can easily be reverse-engineered and replicated unless steps are taken. Fingerprinting techniques used in mobile may be applicable here.
  • "A program to use AI to eliminate vulnerabilities is a very noble pursuit but should not be viewed as a replacement for good software development discipline and implementing run time visibility and protection.
  • "The use of AI will not only be a power for good. The hackers will seek to use these techniques also and there will inevitably be an arms-race between security teams and hackers. To start with however, the cost of entry for bad actors will be high, in terms of knowledge required and complexity of the task, and this will mean that only well funded "nation state" teams will be the primary users of AI for nefarious purposes.  National Security teams will need to have the resources to track and counter these efforts."

(Added, 7:45 PM ET, November 2nd, 2023.) Jon Siegler, CPO at LogicGate, concurs with the industry view of the EO as addressing both risk and opportunity. “As President Biden's Executive Order highlights, the growth of artificial intelligence presents both opportunities and challenges. This balanced outlook recognizes the potential of AI for speed and innovation, while remaining cognizant of the associated risks to security and privacy by those who would seek to misuse it. It is encouraging to see the government taking such robust action to advance AI safely and ethically, promoting research and development, fostering a diverse and skilled AI workforce, and addressing ethical and security considerations.”

(Added, 8:00 PM ET, November 3rd, 2023.) Dr. Ryan Ries, Data & ML Practice Lead at Mission Cloud, thinks the Executive Order is unlikely to have much effect on malicious use of AI. “I think that it is important to look at this and appreciate that it is a bit late to the game. Most people are uncertain when we would even have enough data to train a model like GPT-5. I think it is important that we understand the data sets and make sure that we don't create propaganda-spewing bots, but at the same time people are already going to claim one model or another isn't right and make it political. You can create hate speech and all kinds of things now with these models, and these laws won't change that. You have dark web-based models for hackers. These laws sound great in principle, but at the end of the day they aren't going to matter, because the genie is already out of the bottle and there is no way to put it back in. Companies are going to start looking at smaller models that are more cost-effective and specifically trained for their use case rather than worry about huge models whose cost they can't afford. This order doesn't stop any of that, which is the direction people will be headed.”