Truth and lies in an artificial sense.
By Tim Nodar, CyberWire senior staff writer
Oct 25, 2023

Artificially intelligent, and (therefore?) susceptible to corruption.

A proof-of-concept shows AI's potential for automated social engineering.

Machine-created phishing templates.

Researchers at IBM X-Force Red outline ways in which legitimate generative AI tools like ChatGPT can be tricked into creating malicious output like phishing email templates: “With only five simple prompts we were able to trick a generative AI model to develop highly convincing phishing emails in just five minutes. ... It generally takes my team about 16 hours to build a phishing email, and that’s without factoring in the infrastructure set-up. So, attackers can potentially save nearly two days of work by using generative AI models. And the AI-generated phish was so convincing that it nearly beat the one crafted by experienced social engineers, but the fact that it’s even that on par, is an important development.”

The researchers tested the AI-generated phishing lure against a template crafted by humans, and found that the human-made template was slightly more effective at deceiving recipients.

Regrettable, but foreseeable: of course AI will be turned to criminal purposes.

Dror Liwer, co-founder of cybersecurity company Coro, sees the verification of identity as the appropriate counter to automated phishing. “We have seen, as predicted, Generative AI being used to perfect the content distributed through phishing emails. The focus must remain on the impersonation aspect of phishing which renders the content irrelevant. We need to verify senders and embedded links which will eliminate the need to worry about how convincing the text might be.”
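Liwer's point about verifying senders and embedded links can be illustrated with a minimal sketch. The function and demo message below are hypothetical, not any vendor's implementation: it simply flags link domains in an email body that don't align with the sender's domain, one small content-independent signal. Real defenses would layer SPF, DKIM, and DMARC authentication on top of this kind of check.

```python
import re
from email import message_from_string
from email.utils import parseaddr

def mismatched_link_domains(raw_email: str) -> list[str]:
    """Return embedded link domains that do NOT match the sender's domain.

    A toy illustration of sender/link alignment; real-world verification
    relies on SPF/DKIM/DMARC rather than content inspection alone.
    """
    msg = message_from_string(raw_email)
    _, sender = parseaddr(msg.get("From", ""))
    sender_domain = sender.rsplit("@", 1)[-1].lower()
    # Pull hostnames out of http(s) URLs in the plain-text body.
    body = msg.get_payload()
    hosts = re.findall(r"https?://([^/\s\"'>]+)", body, flags=re.I)
    flagged = []
    for host in hosts:
        host = host.lower().split(":")[0]  # drop any port
        # Flag hosts that are neither the sender's domain nor a subdomain of it.
        if host != sender_domain and not host.endswith("." + sender_domain):
            flagged.append(host)
    return flagged

# Hypothetical phishing-style message with one legitimate and one look-alike link.
demo = (
    "From: IT Support <helpdesk@example.com>\r\n"
    "Subject: Password reset\r\n"
    "\r\n"
    "Reset here: http://example.com/reset and http://examp1e-login.net/reset\r\n"
)
print(mismatched_link_domains(demo))  # → ['examp1e-login.net']
```

However convincing the prose of the message, the look-alike domain is flagged regardless, which is Liwer's argument: verification of identity renders the quality of the text irrelevant.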

It's not just the initial come-on; it's the subsequent interaction. Roger Grimes, data-driven defense evangelist at KnowBe4, is struck by the plausible fluency with which the AI could answer suspicious prospects. “I think nearly every anti-phishing firm has confirmed that AI can be used to construct realistic phishing emails, so that's no surprise. What was surprising about our research was how useful AI could be to an attacker that gets contacted with questions by a potential victim. Many attackers are doing so from foreign countries and are often not experienced in the industries they are attacking. AI helps level that field. AI can help an attacker reply in the victim's native language without a lot of the grammar errors you see in today's attacks, and if targeting a particular industry, use industry jargon that makes the phishing attack seem more real."

Grimes sees these demonstrations as harbingers of more successful automation. “AI will for sure increase not only the volume of phishing attacks, but the quality. AI will lead to more successful phishing attacks. The question is how much more? Phishing is already involved in 70% to 90% of today's successful digital attacks. It's already pretty bad. How much worse will AI make it? We are worried that AI will make it a lot worse with a lot more victims. But could it be that phishing is already pretty bad and, as much as AI-enabled phishing attacks do increase successes, it only moves the needle a bit more (i.e., 75% to 95% instead of 70% to 90%)? That's the big question. How much worse does phishing get because of AI when phishing is already a huge, huge problem, even without AI? And it doesn't factor in the very relevant fact that the good guys are increasingly using AI to fight back. KnowBe4's been using AI-enabled technology for over 10 years. We know that our AI-enabled technology improves the educational experience for customers and decreases cybersecurity risk. It isn't like AI is just being used by the bad guys. The good guys invented it and have been using it even longer. The question is how the increased use of AI by the good side ends up compared to the increase in AI used by the bad side. Who gets the bigger benefit? I wouldn't absolutely bet that AI only benefits the attacker.”

Emily Phelps, Director at Cyware, thinks the humans will only retain the advantage for so long. “Generative AI is a huge tool for adversaries to expedite common threat tactics such as phishing. Although humans may have the edge for now, AI technologies are improving with each passing day. The time to prepare for these evolving tactics is now. We can no longer rely on poor grammar and typos to clue us in to phishing emails, so we must bolster regular security awareness training. Organizations must strengthen security controls to better validate who can access data. As adversaries continuously adapt their tactics, organizations must as well, updating threat detection, improving threat intelligence orchestration, and maintaining vigilance across all levels to defend against today's threats.”