OpenAI sued over false statements generated by ChatGPT
ChatGPT owner OpenAI is facing a defamation lawsuit that could impact the legal definitions around the production of false information by AI programs and the application of Section 230 to their statements.
Armed America Radio host Mark Walters is suing OpenAI, alleging that ChatGPT generated false statements regarding Walters’ relationship and dealings with gun rights nonprofit the Second Amendment Foundation.
The issue stemmed from journalist Fred Riehl’s use of ChatGPT while covering a legal complaint filed by the SAF against Washington attorney general Robert Ferguson in relation to an investigation by Ferguson’s office into SAF and other gun rights organizations. Riehl asked ChatGPT to summarize the legal filing, and the result included multiple statements about Walters even though he wasn’t mentioned in SAF’s complaint.
ChatGPT’s generated statement alleged that Walters defrauded and embezzled $5 million in funds from SAF during his time as the nonprofit’s treasurer and chief financial advisor. According to Walters’ lawsuit, he never held either position with the organization, nor was he ever employed by SAF. When Riehl asked for specific passages from SAF’s complaint mentioning Walters and for the complaint’s full text, ChatGPT produced information that Walters’ filing labeled a “complete fabrication.”
OpenAI acknowledges that ChatGPT can produce false information in its generated statements, which the company calls “hallucinations,” and includes disclaimers on the ChatGPT homepage warning that such statements can occur. Yet the company still describes the AI program as a way to “get answers” and “learn something new,” according to The Verge. Riehl never published the chatbot’s statements.