Scams with a ChatGPT theme.
Apr 20, 2023

Palo Alto’s Unit 42 has observed marked increases in phishing and social engineering scams impersonating ChatGPT.

Scams with a ChatGPT theme.

Palo Alto Networks’ Unit 42 today reported an increase in malicious activity impersonating ChatGPT. Scammers have been observed creating sites that claim to be OpenAI’s and attempting to trick users into sharing personal information or, in some cases, paying for the ChatGPT service.

Phishing sites impersonate OpenAI.

From November 2022 through early April 2023, the researchers observed a 910% increase in ChatGPT-related web domains. They continue: “In this same time frame, we observed a 17,818% growth of related squatting domains from DNS Security logs.” The company’s URL filtering system also caught roughly 118 ChatGPT-related malicious URLs per day. The fake sites mimic OpenAI’s legitimate site but seek to exfiltrate user data, or even charge users for ChatGPT (which, when accessed legitimately through OpenAI, is free to use).
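To give a sense of the lookalike patterns these squatting domains rely on, the sketch below applies a simple keyword-and-similarity check to domain names. It is illustrative only, not Unit 42’s detection logic; the allowlist, threshold, and sample domains are assumptions for demonstration.

```python
# Toy check for ChatGPT-themed squatting domains (illustrative only;
# not Unit 42's detection logic). Sample domains use the reserved
# ".example" TLD and are made up.
from difflib import SequenceMatcher

BRAND_TERMS = ("chatgpt", "openai")
OFFICIAL_DOMAINS = {"openai.com", "chat.openai.com"}  # assumed allowlist

def looks_like_squatting(domain: str, threshold: float = 0.8) -> bool:
    """Flag domains that embed or closely resemble a brand term."""
    domain = domain.lower()
    if domain in OFFICIAL_DOMAINS or domain.endswith(".openai.com"):
        return False                                   # known-good domains
    label = domain.split(".")[0].replace("-", "").replace("_", "")
    for term in BRAND_TERMS:
        if term in label:                              # brand term embedded
            return True
        if SequenceMatcher(None, term, label).ratio() >= threshold:
            return True                                # near-miss / typosquat
    return False

if __name__ == "__main__":
    for d in ("chat.openai.com", "chat-gpt-free-access.example", "openai-login.example"):
        print(d, "->", "suspicious" if looks_like_squatting(d) else "ok")
```

Real DNS-security and URL-filtering products combine many more signals (registration age, certificate data, hosting infrastructure), but the brand-embedding and near-miss patterns above are the core of what makes a domain “squatty.”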

Copycat chatbots: not to be trusted.

Before OpenAI released an official API for ChatGPT, and in part because ChatGPT is not accessible in some regions, several open-source projects appeared offering automation tools that connect to the ChatGPT service. Regardless of whether any money changes hands, these are not trustworthy ways to access ChatGPT, and the copycat chatbots are often based on 2020’s GPT-3, even though GPT-3.5 and GPT-4 have since been released.
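For context, the supported route is OpenAI’s own API rather than third-party automation tools. Below is a minimal sketch using the official openai Python client in its pre-1.0 form (current at the time of writing); the environment variable and prompt are placeholders.

```python
# Minimal sketch of calling ChatGPT through OpenAI's official API rather
# than a third-party automation tool. Uses the openai Python client in its
# pre-1.0 form; the API key comes from the environment, never hard-coded.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # placeholder; set in your shell

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```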

Best practices for protecting against ChatGPT-related scams.

Given the increase in these attacks, researchers advise treating ChatGPT-related emails and sites with suspicion, and accessing ChatGPT only through OpenAI’s official website. Security tooling such as URL filtering can also help protect against these scams.
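As a small illustration of the “official site only” advice, the snippet below checks whether a link actually points at an OpenAI-owned host before it is followed. It is a toy allowlist check, not a substitute for real URL filtering, and the host list is an assumption.

```python
# Toy allowlist check for "ChatGPT" links (illustrative only).
from urllib.parse import urlparse

OFFICIAL_HOSTS = {"openai.com", "chat.openai.com", "platform.openai.com"}  # assumed

def is_official_openai_link(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    return host in OFFICIAL_HOSTS or host.endswith(".openai.com")

print(is_official_openai_link("https://chat.openai.com/"))           # True
print(is_official_openai_link("https://chatgpt-free.example.com/"))  # False
```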