At a glance.
- Apple says it won’t comply with proposed amendments to the UK’s Investigatory Powers Act.
- Colombia to stand up a national cybersecurity body.
- Nominee to lead NSA and CYBERCOM speaks on Section 702 and AI.
- Big Tech signs on to the White House AI policy.
Big Tech signs on to the White House AI policy.
The White House this morning announced that seven major tech companies, all of them significant players in the young but burgeoning field of artificial intelligence (AI) research and development, have agreed to work under principles intended to preserve safety, security, and trust in the AI systems they develop. The White House enunciated the principles under which work on AI should proceed:
- "Ensuring Products are Safe Before Introducing Them to the Public"
- "The companies commit to internal and external security testing of their AI systems before their release. This testing, which will be carried out in part by independent experts, guards against some of the most significant sources of AI risks, such as biosecurity and cybersecurity, as well as its broader societal effects.
- "The companies commit to sharing information across the industry and with governments, civil society, and academia on managing AI risks. This includes best practices for safety, information on attempts to circumvent safeguards, and technical collaboration."
- "Building Systems that Put Security First"
- "The companies commit to investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. These model weights are the most essential part of an AI system, and the companies agree that it is vital that the model weights be released only when intended and when security risks are considered."
- "The companies commit to facilitating third-party discovery and reporting of vulnerabilities in their AI systems. Some issues may persist even after an AI system is released and a robust reporting mechanism enables them to be found and fixed quickly."
- "Earning the Public’s Trust"
- "The companies commit to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system. This action enables creativity with AI to flourish but reduces the dangers of fraud and deception."
- "The companies commit to publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use. This report will cover both security risks and societal risks, such as the effects on fairness and bias.
- "The companies commit to prioritizing research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination, and protecting privacy. The track record of AI shows the insidiousness and prevalence of these dangers, and the companies commit to rolling out AI that mitigates them."
- "The companies commit to develop and deploy advanced AI systems to help address society’s greatest challenges. From cancer prevention to mitigating climate change to so much in between, AI—if properly managed—can contribute enormously to the prosperity, equality, and security of all."
It's an international effort. The White House says the US has been actively consulting partners in Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK.
Mike Britton, CISO of Abnormal Security, wrote to urge that concern about the transmission of human prejudice to AI not obscure the initiative's deep interest in cybersecurity proper. "While some of the new policies are around biases, there’s a strong push for cybersecurity as well. I think ethics, transparency, and the ability for a human to 'override' or intervene with the AI are key (these are similar to the components outlined in the EU’s AI Act)." Britton added that, of course, we can expect AI to be put to malicious use:
"I think regulation will make it easier for enterprises to embrace its use (for cybersecurity or otherwise). But today’s threat actors are savvy, and will likely still be able to figure out ways around safeguards. We’ve already seen this with the emergence of tools like WormGPT. ChatGPT, Google Bard, and Claude have explicit checks built in to prevent abuse and malicious use by threat actors – they work by sending users’ prompts to OpenAI (for ChatGPT), Google (for Bard) and Anthropic (for Claude), who then run the prompts through a series of checks in their models, before sending the output back to the user. WormGPT, on the other hand, uses open source models like LLAMA and GPTJ. Users run these models by downloading them to their own computers, which allows them to remove the check process entirely. This means there are no limits on the kind of content it could produce.
"This is a real concern. There was another GPT-like tool that was discovered that seeded disinformation in its responses – not consistently, but randomly, so that the user couldn't really determine what was true or not true. I think there's still the risk of bad models and model poisoning, but I don't think that risk is as high with the big players in the space. The most significant regulation will be around ethics, transparency and assurances in how the AI operates, and having some mechanism that still requires a human component. Any good AI solution should also enable a human to make the final decision when it comes to executing (and potentially undoing) any actions taken by AI."
Rob Vamosi, Senior Security Analyst at ForAllSecure, sees both the urgency of the issues that surround AI and such systems' potential for deliberate abuse. "The statement this morning from the Biden-Harris administration underscores the urgency in addressing fundamental concerns around AI," he said. "With emergent AI systems there is a risk for massive data leaks, not only of the data that’s being fed into these databases but also leaks associated with the release of the weights and biases, the learnable parts of the AI model. It is incumbent upon the AI industry to safeguard these systems today by testing, reporting, and fixing software vulnerabilities quickly. That said, there needs to be more guidance around what testing is necessary to safeguard AI. Unfortunately with any tool, AI can and will be used both offensively and defensively. Already we are seeing ChatGPT being used by malicious actors. It then becomes a cat and mouse game, with one trying to stay one step ahead of the other. To mitigate the potential for misinformation, the Biden-Harris administration is calling for watermarks on the AI products so it is possible to determine the integrity of the results."
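The watermarking Vamosi mentions is often framed in the research literature as a statistical scheme: generation is nudged toward a pseudorandomly chosen "green list" of tokens, and a detector later tests whether green tokens are over-represented (see, e.g., Kirchenbauer et al., 2023). The sketch below is a minimal illustration of that detection idea, assuming a shared hashing scheme; the hash, green fraction, and threshold are illustrative assumptions, not any signatory's actual system.

```python
# Minimal sketch of statistical "green-list" watermark detection. All parameters
# here are illustrative assumptions, not any company's deployed scheme.
import hashlib
import math

GREEN_FRACTION = 0.5  # assumed share of the vocabulary favored during generation

def is_green(prev_token: str, token: str) -> bool:
    """Deterministically assign `token` to the green list, seeded by its predecessor."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 255.0 < GREEN_FRACTION

def detect_watermark(tokens: list[str], threshold: float = 4.0) -> tuple[float, bool]:
    """Return a z-score for green-token over-representation and a yes/no verdict."""
    n = len(tokens) - 1
    if n < 1:
        return 0.0, False
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    expected = GREEN_FRACTION * n
    stddev = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    z = (hits - expected) / stddev
    return z, z > threshold

if __name__ == "__main__":
    sample = "watermarked text would be generated to favor green-listed tokens".split()
    z_score, flagged = detect_watermark(sample)
    print(f"z-score={z_score:.2f}, watermark detected={flagged}")
```

Ordinary human text should score near zero under such a test, while text generated with the matching bias should push the z-score well above the threshold; that asymmetry is what makes the approach attractive for labeling AI-generated content.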
James Campbell, CEO of Cado Security, sees a range of complex and nuanced issues surrounding voluntary standards for AI. "There has been a lot of discussion around using AI technologies to assist in developing malware or enabling adversaries to conduct more sophisticated attacks. Despite this discourse, there doesn't appear to have been much public reporting of AI-enabled cyberattacks. There has been some reporting of 'deepfake' technologies (classed as AI) used for misinformation and scamming purposes. These pose a more immediate threat than AI-enabled malware or cyberattacks. With this in mind, any specific cybersecurity concerns behind these policies are likely speculative at this moment in time and are potentially focused on future readiness. Companies working within the AI space will need to ensure that they can meet the cybersecurity standards as laid down by the regulator; this may be seen as a method of constraining the speed of advancement in the area, as companies are less likely to have the same cybersecurity controls as banks or other highly regulated sectors. These controls will inevitably have an impact on how development teams work.
"The 'testing' referred to in the release is likely around both the internal security of AI developers and the broader societal impact of the technologies themselves. There is a lot of potential for privacy issues arising from the use of AI technologies, especially around Large Language Models (LLMs) such as ChatGPT. OpenAI themselves disclosed a vulnerability in ChatGPT on May 23, which inadvertently provided access to other users' conversation titles. Clearly this has serious data security implications for users of these LLMs. More generally, companies may be asked to conduct a risk assessment from a societal impact perspective prior to releasing AI-enabled technologies.
"With LLMs in particular, guardrails around the content of prompts appear to have been implemented since the early stages of development. This will be clear to users who have attempted to ask public LLMs for information that could be used in a nefarious manner. However, much like any computing system, there will always be some method of circumventing these protections. What follows is the typical cat and mouse game of identifying vulnerabilities and remediating them. There most likely will never be a way to absolutely guarantee that such technologies will only be used for defensive purposes."
Some of the challenges AI presents will be familiar from other precincts of the Internet. Campbell said, "Public awareness around validation of information sourced from the Internet is a must. An issue with LLMs in particular is that they deliver information in an authoritative manner which is often incorrect (described as 'hallucinations'). This results in users of these LLMs believing they have intimate knowledge of a subject area even on occasions where they've been misled. Users of LLMs need to approach the results of their prompting with a large dose of scepticism and additional validation from an alternative source. Government guidance to users should emphasise this until the results are more reliable."
And, of course, Campbell argues that any voluntary system must be properly incentivized. "For the voluntary system to work, it needs to be advantageous for companies to sign up; i.e., there need to be incentives for participating organizations. This could be in the form of certification in regard to signing up to the series of safeguards and additional cybersecurity measures. Further, it needs to be clear which companies sign up to these additional measures so they can be seen as 'trusted.' At the same time, things need to be proportionate to the risk and size of the business. For example, smaller companies will likely not have the same level of resources as larger ones, and this could stifle potential innovation and competition in the space. In short, whatever is implemented needs to be balanced so that companies with good intentions that participate don't have to compromise on innovation."