National security and AI’s potential for disinformation and the management of violence.
Oct 31, 2023

President Biden's Executive Order on artificial intelligence focuses in large part on national security. It offers a framework that remains to be fleshed out in strategy and policy.


The Executive Order (EO) emphasizes the national security aspects of properly regulated artificial intelligence. “In accordance with the Defense Production Act,” the White House Fact Sheet explains, “the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public.” 

Recognizing risk (it’s not always obvious).

But recognizing which models carry that sort of risk isn’t trivial, if only because AI is already in military and intelligence use, and has been for some time. An essay in Foreign Policy points out, “The mathematical foundations of AI are ubiquitous, the human skills to create AI models have widely proliferated, and the drivers of AI research and development—both human creativity and commercial gain—are very powerful.” 

Jeff Williams, co-founder and CTO at Contrast Security, wrote, “This EO seems to only apply to AI systems that pose a serious risk to national security and public health and safety. How are we to determine this? Even an AI used to create social media posts will have incalculable effects on our elections. Almost any AI could flood a critical agency with requests that are indistinguishable from real human requests. They could be realistic voice mail messages, or videos of system damage that aren’t real. The opportunities to undermine national security are endless.”

Disinformation as a special case of a threat to national security.

The EO charges the Commerce Department with developing “guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.” This points toward a technical response to the ease with which AI can be exploited to generate and spread disinformation at scale.
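The EO does not specify a labeling mechanism; that is left to the forthcoming Commerce guidance. As a purely illustrative sketch of the idea, the Python snippet below attaches a tamper-evident “ai-generated” disclosure tag to a piece of content using an HMAC. The key, tag format, and function names here are hypothetical, not anything prescribed by the Order.

```python
import hashlib
import hmac

# Hypothetical labeling key held by the publishing agency; in practice this
# would come from a key-management service, not be hard-coded.
LABEL_KEY = b"example-agency-labeling-key"

def label_content(content: bytes, disclosure: str = "ai-generated") -> str:
    """Produce a tamper-evident label binding a disclosure tag to the content."""
    tag = hmac.new(LABEL_KEY, content + disclosure.encode(), hashlib.sha256)
    return f"{disclosure}:{tag.hexdigest()}"

def verify_label(content: bytes, label: str) -> bool:
    """Recompute the tag for the received content and compare in constant time."""
    disclosure, _, received = label.partition(":")
    expected = hmac.new(LABEL_KEY, content + disclosure.encode(), hashlib.sha256)
    return hmac.compare_digest(expected.hexdigest(), received)

article = b"Press release text generated with model assistance."
label = label_content(article)
assert verify_label(article, label)             # intact content checks out
assert not verify_label(article + b"!", label)  # any edit invalidates the label
```

A scheme like this only proves the label and content haven’t been altered since labeling; it says nothing about who did the labeling, which is where the identity-binding Groth describes below comes in.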

“The challenge of AI-generated content is that real people need to easily confirm that it is authentic and originates from a trustworthy source – and the identity of the individual or organization was thoroughly verified by a trusted 3rd party,” wrote Lorie Groth, Director of Product Marketing at DigiCert. “Authenticity of any type of file or media object is based on a combination of (1) the level of assurance used to identify the individual or organization, (2) using trusted technology, such as PKI, to bind the digital asset to this information, and (3) the ability of a real person to easily validate that it is authentic and has not been manipulated.”
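Groth’s three-part model maps onto ordinary digital signatures. The sketch below, which uses Python’s third-party cryptography package and generates a throwaway key pair in place of a CA-issued certificate, shows steps (2) and (3): binding an asset to a key by signing it, and letting anyone with the public key confirm the asset hasn’t been manipulated. Step (1), vetting the signer’s identity, is what a trusted third party performs before issuing the certificate.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# In a real deployment the key pair would sit behind a certificate issued by
# a trusted CA after identity vetting; here we generate one locally.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

asset = b"Official statement from the agency."
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# (2) Bind the asset to the signer's key by signing its hash.
signature = private_key.sign(asset, pss, hashes.SHA256())

# (3) Any recipient with the public key can validate authenticity.
try:
    public_key.verify(signature, asset, pss, hashes.SHA256())
    print("Asset is authentic and unmodified.")
except InvalidSignature:
    print("Asset was altered or signed by someone else.")

# A single changed byte causes verification to fail.
try:
    public_key.verify(signature, asset + b"!", pss, hashes.SHA256())
except InvalidSignature:
    print("Tampered copy rejected.")
```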