OpenAI sued over false statements generated by ChatGPT

ChatGPT owner OpenAI is facing a defamation lawsuit that could shape how the law treats false information produced by AI programs and whether Section 230 applies to their statements.

Armed American Radio host Mark Walters is suing OpenAI, alleging that ChatGPT generated false statements about Walters’ relationship and dealings with the gun rights nonprofit the Second Amendment Foundation (SAF).

The issue stemmed from journalist Fred Riehl’s use of ChatGPT while covering a legal complaint filed by the SAF against Washington attorney general Robert Ferguson over an investigation by Ferguson’s office into SAF and other gun rights organizations. Riehl asked ChatGPT to summarize the legal filing, and the result included multiple statements about Walters even though he wasn’t mentioned in SAF’s complaint.

ChatGPT’s generated statement alleged that Walters defrauded and embezzled $5 million in funds from SAF during his time as the nonprofit’s treasurer and chief financial officer. According to Walters’ lawsuit, he never held either position with the organization, nor was he ever employed by SAF. When Riehl asked for specific passages from SAF’s complaint mentioning Walters and for the complaint’s full text, ChatGPT produced information that Walters’ filing labeled a “complete fabrication.”

OpenAI acknowledges that ChatGPT can produce false information in its generated statements, which the company calls “hallucinations,” and the ChatGPT homepage carries disclaimers warning that such statements can occur. Yet the company still describes the AI program as a way to “get answers” and “learn something new,” according to The Verge. Riehl never published the chatbot’s statements.

This isn’t the first time false statements made by ChatGPT have raised potential legal issues around defamation. Brian Hood, mayor of the Australian town of Hepburn Shire, threatened to sue OpenAI earlier this year after ChatGPT falsely claimed he was imprisoned on bribery charges in the early 2000s. Unlike Walters, Hood paired his threat of legal action with a request that OpenAI correct the false claims to avoid going to court.

Walters made no such request, choosing instead to sue for unspecified monetary damages. As law professor Eugene Volokh, who has written extensively about the legal liability of AI systems, explained in a blog post, this could impact the strength of Walters’ case.

“Here, it doesn’t appear from the complaint that Walters put OpenAI on actual notice that ChatGPT was making false statements about him, and demanded that OpenAI stop that,” Volokh said. “And there seem to be no allegations of actual damages—presumably Riehl figured out what was going on, and thus Walters lost nothing as a result.”

The larger implication of the lawsuit is whether false and defamatory statements generated by AI systems fall under the protections of Section 230, which shields online platforms from legal liability for third-party information hosted on them. Walters’ case is believed to be the first of its kind, which means there is little to no legal precedent for holding a company that operates an AI system legally responsible for what that system generates.

Volokh told Ars Technica that Section 230 “doesn’t immunize defendants who ‘materially contribut[e] to [the] alleged unlawfulness’ of online content.” He added, “An AI company, by making and distributing an AI program that creates false and reputation-damaging accusations out of text that entirely lacks such accusations, is surely ‘materially contribut[ing] to [the] alleged unlawfulness’ of that created material.” He also said that ChatGPT’s disclaimers might not be sufficient to shield OpenAI from liability.

On the other hand, the fact that ChatGPT is known to generate false information may not be enough to tip things in Walters’ favor. “I don’t think this general knowledge is sufficient, just like you can’t show that a newspaper had knowledge or recklessness as to falsehood just because the newspaper knows that some of its writers sometimes make mistakes,” Volokh said. “For liability in such cases (again, absent actual damages to a private figure), there has to be a showing that the allegedly libelous ‘statement was made with “actual malice”—that is, with knowledge that it was false or with reckless disregard of whether it was false or not.’ And here no one at OpenAI knew about those particular false statements, at least unless Walters had notified OpenAI about them.”

OpenAI has not responded to requests for comment regarding Walters’ lawsuit.
