Generative AI is now a tool for malicious social engineering.
By Jason Cole, CyberWire staff writer.
Jun 22, 2023

Fraudsters are using generative AI for their schemes.

Sift has released its Q2 2023 Digital Trust and Safety Index, “Fighting fraud in the age of AI automation,” which examines the use of generative AI in social engineering schemes and consumer fears surrounding the new technology. Those fears aren’t entirely groundless. “In the last six months, 68% of consumers noticed an increase in the frequency of spam and scams, likely driven by the surge in AI-generated content. And Sift data shows a 40% increase in the average rate of fraudulent content blocked from the network in Q1 2023 vs. the entirety of 2022. This trajectory is only expected to continue.” The core threat of AI is that it lowers the barrier to entry for fraud and social engineering scams.

Non-native (or poorly skilled native) speakers benefit.

Generative AI allows a user to carry on an error-free text conversation in any language, which keeps the average recipient from immediately noticing that something is amiss. That opens the door to phishing schemes that are harder to detect with the naked eye. “The emergence of AI-generated emails impersonating executives, coupled with employees’ poor password hygiene and low reporting rates, make these scams a significant—and fast-growing—risk for businesses,” Sift writes. Furthermore, Sift explains, this isn’t a problem set at some distant point in the future; it’s happening now, and it will probably become more common. Sift has already observed a 66% increase in blocked transactions and content from the same fraudsters on its network from Q4 2022 to Q1 2023.

Suggestions for a defensive response.

Given this emerging threat, Sift calls on companies to deploy real-time fraud protection capable of detecting more than the usual red flags associated with fraud. Some experts recommend fighting fire with fire: using AI to identify fraudulent activity, and potentially to flag AI-generated content used for phishing. Sift writes, “Strong defenses and a future-forward fraud strategy can help prevent the threat of AI and bot-fueled ATO and payment fraud. Businesses that only focus on a few known red flags won’t be able to keep up with evolving risk. Instead, they must look at transactions holistically to differentiate between fraudulent and legitimate activity. Businesses need a comprehensive, real-time solution to keep up with fraudsters who are leveraging more dangerous and easily-accessible tools.”
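To make the holistic idea concrete, here is a minimal sketch of what scoring a transaction on several weak signals at once, rather than a single red flag, might look like. All field names, weights, and thresholds below are illustrative assumptions for this sketch; they are not Sift's product or methodology.

```python
# Hypothetical sketch of "holistic" fraud scoring: no single red flag is
# decisive, but several weak signals firing together push a transaction
# over a risk threshold. Field names and weights are invented for
# illustration only.

def score_transaction(txn: dict) -> float:
    """Return a fraud-risk score in [0, 1] by combining weighted signals."""
    signals = {
        "new_account": txn.get("account_age_days", 9999) < 7,
        "ip_country_mismatch": txn.get("ip_country") != txn.get("billing_country"),
        "high_velocity": txn.get("orders_last_hour", 0) > 3,
    }
    weights = {
        "new_account": 0.3,
        "ip_country_mismatch": 0.35,
        "high_velocity": 0.35,
    }
    return sum(weights[name] for name, fired in signals.items() if fired)

def is_suspicious(txn: dict, threshold: float = 0.5) -> bool:
    # Each weight is below the threshold on its own, so flagging requires
    # at least two signals to fire together -- the "holistic" view, as
    # opposed to blocking on any one known red flag.
    return score_transaction(txn) >= threshold
```

In a real deployment the weights would come from a trained model rather than hand tuning, but the design point survives: evaluating signals jointly catches fraud that evades any single-rule filter.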