Tech leaders call for a pause in advanced AI research.
Mar 30, 2023

Over 1,000 tech leaders have signed an open letter urging a pause in the development of advanced AI technology.

Elon Musk, Steve Wozniak, and Andrew Yang are among those who’ve signed an open letter urging a slowdown in the development of AI technology. The letter warns of the danger its signatories believe advanced AI poses to humanity. (But some critics disagree.)

An open letter calls for a pause in AI development.

The letter begins by asserting that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.” It calls for a pause of at least six months in the training of AI systems more powerful than GPT-4, arguing that the pause should be used to refine existing AI systems and make them “more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.” The letter also calls on AI developers to work with policymakers to implement regulations on AI. Dark Reading reports that even proponents of AI development, like the chief executive of OpenAI, have shared concerns about “AI's ability to both spread disinformation and launch cyberattacks.”

Critics of the letter raise questions.

Wired reports that Microsoft and Google did not respond to requests for comment on the letter. Hannah Wong, a spokesperson for OpenAI, emphasizes that the company spent more than six months working on the safety of GPT-4 after training the model, and notes that it is not currently training GPT-5. Ken Holstein, an assistant professor of human-computer interaction at Carnegie Mellon University, signed the letter but retracted his signature a day later, after debate over which demands it makes sense to issue at this time.

Chris Doman, CTO & Co-Founder at Cado Security, thinks some of the signatories, at least, may be motivated in part by a concern for their commercial interests in addition to their fellow-feeling for the human future: 

“The open letter is aggressive in explicitly suggesting governments should step in to ban the training of models more powerful than GPT-4. This puts it at the beginning of a track to control the technology in the same way nuclear proliferation is controlled. The difference, however, is that AI training can be performed on commodity hardware (albeit in massive amounts) and is therefore harder to control.

"Ultimately the intention here may be more to put pause on OpenAI's progress through pressure rather than law. We also have to be a little suspicious of the intentions here - many of the authors of the letter have commercial interests in their own companies getting a chance to catch up with OpenAI's progress. Frankly, it's likely the only company currently training an AI system more powerful than GPT-4 is OpenAI, as they are currently training GPT-5.”

ChatGPT: easy access for a layman.

Michaël Lakhal, Director of Product Management at OneSpan, notes the ease with which the average person (and by extension, the average hacker) can use AI for nefarious purposes:

“The technology behind ChatGPT is not new, but its open accessibility has made it much easier for the average consumer to leverage the platform. However, this has also made it easier for threat actors and hackers to utilize the technology for nefarious purposes. Hackers can utilize the technology to create more sophisticated phishing emails - mimicking brands and tone or more easily translating copy into several languages - making them more difficult to identify and easily connecting hackers with global audiences. Further, with ChatGPT, the average person with limited technical skills can become a hacker by asking the platform to write malicious code. While OpenAI, the creators of ChatGPT, have put in safeguards to prevent the chatbot from answering these questions, it’s been proven that there are easy ways to work around this programming. Essentially, ChatGPT — in the wrong hands — could serve as a ‘how-to’ guide for potential and existing hackers, providing resources and pointers on how to improve and hone your skills.

"Though ChatGPT makes it easier to carry out attacks, these methods aren’t new so the solutions remain the same. For phishing attacks, business leaders need to set up clear policies, educate all employees, implement thorough and continuous authentication policies, set up anti-phishing alerts, and avoid sharing sensitive information over easily hacked mediums like email and SMS. Similar methods should be employed for malware as this malicious code is often delivered and distributed through methods such as phishing. While ChatGPT isn't creating a brand new threat, it has the potential to drastically increase the number of malicious events and actors, so organizations need to remain extra vigilant in their ongoing security practices.”

Other industry comment on the open letter's animadversions about advanced AI.

(Added, 12:24 PM, March 30th, 2023. Two other OneSpan executives contacted us late this morning with comment on the issues the letter raises. Will LaSala, Field CTO, wrote, “No one in the industry understood just how transformative generative AI was going to be. As we look back at it, we can see that all AI before generative AI was just an infancy test stage. Analyzing data and providing rule-based responses were the training wheels of the AI industry. Hackers first started to see some of the power of AI as they used it to understand user patterns in order to reduce the word lists they needed for phishing and account takeover. I have already seen people advertising over 10k use cases for ChatGPT, which you can read and download in a paid blog. I think we are just scratching at the surface.”

Frederik Mennes, Director Product Management & Business Strategy, thinks it’s naive to expect such a technological advance to be stopped by fiat. “Although the development of generative AI is going faster and has a broader impact than expected, it’s naive to believe the development of generative AI can be stopped or even paused. If development were stopped in the US, other regions would simply catch up and try to take the lead. From that perspective, there is also a geopolitical element. If there is a need for regulation, it can be developed in parallel. But the development of the technology is not going to wait until regulation has caught up. It’s just a reality that technology develops and regulation catches up.”)

(Added, 5:45 PM, March 30th, 2023. Dan Shiebler, Head of Machine Learning at Abnormal Security, is struck by the range of the signatories’ backgrounds, but he also doubts that the letter will have much effect. “The interesting thing about this letter is how diverse the signers and their motivations are. Elon Musk has been pretty vocal that he believes AGI (computers figuring out how to make themselves better and therefore exploding in capability) to be an imminent danger, whereas AI skeptics like Gary Marcus are clearly coming to this letter from a different angle,” he wrote. “In my mind, technologies like LLMs present two types of serious and immediate risks. The first is that the models themselves are powerful tools for spammers to flood the internet with low quality content or for criminals to uplevel their social engineering scams. At Abnormal we have designed our cyberattack detection systems to be resilient to these kinds of next-generation commoditized attacks.

"The second is that the models are too powerful for businesses to not use, but too unpolished for businesses to use safely. The tendency of these models to hallucinate false information or fail to calibrate their own certainty poses a major risk of misinformation. Furthermore, businesses that employ these models risk cyberattackers injecting malicious instructions into their prompts. This is a major new security risk that most businesses are not prepared to deal with.

"Personally, I don’t think this letter will achieve much. The cat is out of the bag on these models. The limiting factor in generating them is money and time, and both of these will fall rapidly. We need to prepare businesses to use these models safely and securely, not try to stop the clock on their development.")