Recent generative AI services have built-in checks against malicious use, but tools built on older, open-source foundations can be freely adapted to criminal purposes.
WormGPT, an “ethics-free” text generator.
Researchers at SlashNext describe a generative AI cybercrime tool called “WormGPT,” which is being advertised on underground forums as “a blackhat alternative to GPT models, designed specifically for malicious activities.” The tool can generate output that legitimate AI models are designed to refuse, such as malware code or phishing templates.
Generating convincing text for social engineering.
SlashNext asked WormGPT to write an email “intended to pressure an unsuspecting account manager into paying a fraudulent invoice.” The researchers state, “The results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks. In summary, it’s similar to ChatGPT but has no ethical boundaries or limitations. This experiment underscores the significant threat posed by generative AI technologies like WormGPT, even in the hands of novice cybercriminals.”
A criminal tool built on open-source models that predate built-in checks.
Dan Shiebler, Head of Machine Learning at Abnormal Security, commented that legitimate generative AI tools have built-in checks against this kind of criminal use. “The most common Generative AI tools like ChatGPT, Google Bard, and Claude have explicit checks built in to prevent abuse and malicious use by threat actors,” Shiebler wrote. “These checks cannot be avoided,” he added, “because the tools work by sending users’ prompts to OpenAI (for ChatGPT), Google (for Bard) and Anthropic (for Claude), who then run the prompts through a series of checks in their models, before sending the output back to the user. Attackers can trick these checks, but it’s fairly difficult to do. Tools like WormGPT, on the other hand, use open source models like LLAMA and GPTJ. Users run these models by downloading them to their own computers, which allows them to remove the check process entirely – they don’t need to be particularly savvy or do any work to trick the checks, like they would with a tool like ChatGPT. This means there are no limits on the kind of content it could produce. GPTJ, which is what WormGPT is built on, has been around since 2021, so cybercriminals have likely already been using it for years.”
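The architectural difference Shiebler describes can be pictured with the minimal sketch below. It is an illustration, not any vendor's actual pipeline: the function names (provider_safety_check, hosted_generate, local_generate, run_model) and the toy keyword policy are hypothetical, standing in for the provider-side screening that hosted services run before returning output, and which locally downloaded open-source weights simply do not have in the loop.

```python
# Conceptual sketch of the two serving architectures Shiebler contrasts.
# All names and the screening policy here are hypothetical, for illustration only.

from dataclasses import dataclass


@dataclass
class Verdict:
    allowed: bool
    reason: str = ""


def provider_safety_check(prompt: str) -> Verdict:
    """Stand-in for the provider-side screening that hosted services
    (ChatGPT, Bard, Claude) apply to prompts before generation."""
    blocked_intents = ("phishing", "malware")  # toy policy, not a real filter
    for intent in blocked_intents:
        if intent in prompt.lower():
            return Verdict(allowed=False, reason=f"blocked intent: {intent}")
    return Verdict(allowed=True)


def run_model(prompt: str) -> str:
    """Placeholder for actual model inference."""
    return f"<model output for: {prompt!r}>"


def hosted_generate(prompt: str) -> str:
    """Hosted path: the prompt travels to the provider, which gates it
    server-side. The user never touches the weights, so the gate stays."""
    verdict = provider_safety_check(prompt)
    if not verdict.allowed:
        return f"[refused: {verdict.reason}]"
    return run_model(prompt)


def local_generate(prompt: str) -> str:
    """Local path: open-source weights (e.g. GPT-J) run on the user's own
    machine, so nothing forces a safety check before generation."""
    return run_model(prompt)


if __name__ == "__main__":
    request = "Draft a phishing email to an account manager"
    print("hosted:", hosted_generate(request))  # refused by the provider gate
    print("local: ", local_generate(request))   # no gate in the loop
```

The design point is that in the hosted flow the gate runs on the provider's servers, out of the user's reach, whereas downloading open-source weights moves inference entirely onto the user's machine, so there is no gate left to trick or remove.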