Who're you gonna believe, the AI or your own lyin' eyes (or ears, nose, etc.)?
AI hallucinations in coding, defamation, and legal citations.
AI is susceptible to hallucinations, which have now been both demonstrated and litigated.
ChatGPT “hallucinations.”
Researchers at Vulcan Cyber warn that attackers can use ChatGPT to trick developers into installing malicious packages.
Noting that developers have begun using ChatGPT for coding assistance, the researchers state, “We’ve seen ChatGPT generate URLs, references, and even code libraries and functions that do not actually exist.” Such fabrications are the “hallucinations” in question. “These LLM (large language model) hallucinations have been reported before and may be the result of old training data,” Vulcan continues. “If ChatGPT is fabricating code libraries (packages), attackers could use these hallucinations to spread malicious packages without using familiar techniques like typosquatting or masquerading. Those techniques are suspicious and already detectable. But if an attacker can create a package to replace the ‘fake’ packages recommended by ChatGPT, they might be able to get a victim to download and use it.”
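One practical mitigation, before installing any package an AI assistant recommends, is simply to check whether the name actually exists on the registry. The Python sketch below does this against PyPI’s public JSON API (the endpoint is real; the package names in the demo are illustrative, and the second one is deliberately made up):

import json
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a published package on PyPI.

    A 404 from PyPI's JSON API means no such package exists, which is
    one signal that an AI-suggested name may be a hallucination (or an
    unregistered slot an attacker could claim).
    """
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            json.load(response)  # well-formed metadata implies the package is real
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other HTTP errors are inconclusive; let the caller decide

if __name__ == "__main__":
    # "requests" is real; the second name is a made-up example of the kind
    # of plausible-sounding package an LLM might invent.
    for pkg in ("requests", "flask-json-schema-validator-pro"):
        verdict = "exists on PyPI" if package_exists_on_pypi(pkg) else "NOT on PyPI; do not install blindly"
        print(f"{pkg}: {verdict}")

Existence alone is not a sufficient check, of course, since the attack Vulcan describes consists precisely of registering the hallucinated name; release history, download counts, and maintainer reputation are additional signals worth reviewing.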
Mike Myer, CEO of Quiq, offered the following observations:
“Large language models carry risk and limitations for companies. Outputs can be biased, wrong, or flat-out invented. We are seeing lots of interest from brands who want to take advantage of ChatGPT but don't know how to do it internally. No company can risk the legal and PR nightmare of notoriously wrong answers being provided to their customers.
“Yet the larger problem is that ChatGPT knows only information that is publicly available on the Internet. Customer service agents often use a company’s confidential information to answer questions. And to answer questions specific to a customer’s accounts, agents or bots need customer data to draw from.”
Hallucinations now cited in litigation.
Georgia radio host Mark Walters is suing OpenAI LLC for defamation after ChatGPT allegedly generated an answer that falsely stated that Walters had been sued for fraud and embezzlement, Bloomberg Law reports. The “hallucinated” result was generated for a journalist covering a case unrelated to Walters.
The lawsuit states, “ChatGPT’s allegations concerning Walters were false and malicious, expressed in print, writing, pictures, or signs, tending to injure Walter’s [sic] reputation and exposing him to public hatred, contempt, or ridicule.”
In a separate case, two lawyers are facing potential sanctions in the Southern District of New York after they used phony legal research generated by ChatGPT, the Associated Press reports. The lawyer who included the fictitious research in the court filing apologized, saying he “did not comprehend that ChatGPT could fabricate cases.”