Radio host sues ChatGPT developer over allegedly libelous claims

Lawsuit claims OpenAI is responsible for the chatbot's 'false and malicious' answers.
Legal experts aren't sure what happens when a non-human entity is involved in a libel suit. Credit: Deposit Photos

ChatGPT libeled a Georgia radio host by inaccurately claiming he was accused of defrauding and embezzling money from the Second Amendment Foundation (SAF), a lawsuit filed Monday in Georgia state court claims.

The host, Mark Walters, sued artificial intelligence company OpenAI in Gwinnett County Superior Court, arguing that the statements made by the company’s large language model-powered chatbot, ChatGPT, were “false and malicious.” The chatbot also stated that Walters was the foundation’s treasurer and chief financial officer, when in reality he “has no employment or official relationship with SAF,” the lawsuit says.

According to Gizmodo, this is “a first-of-its-kind libel lawsuit,” although previous instances of ChatGPT “hallucinating” questionable and inaccurate biographical information are numerous. For example, earlier this year, ChatGPT incorrectly claimed that law professor Jonathan Turley had been accused of sexual harassment by a student.

In Walters’ case, firearms journalist Fredy Riehl asked ChatGPT to summarize a recent, real lawsuit, The Second Amendment Foundation (SAF) v. Robert Ferguson. After Riehl gave ChatGPT a link to the case, the chatbot generated a response describing accusations against Walters, including embezzlement, misappropriation of funds, and manipulation of financial records. Upon fact-checking the accusations, Riehl “confirmed that they were false,” the lawsuit says. When confronted, ChatGPT repeated and expanded on the allegations, even providing an “Exhibit 1” that was “a complete fabrication and bears no resemblance to the actual complaint, including an erroneous case number.”

Walters is not mentioned in The Second Amendment Foundation (SAF) v. Robert Ferguson, which does not even concern financial accounting claims. “What struck me the most in this occurrence was the level of detail, including names of people, law firms, and organizations I was familiar with,” Riehl told PopSci Wednesday. “It made me question what I thought I knew about them.”

Walters is now seeking unspecified damages to be determined at trial, as well as any other relief the court and jury decide to award. Legal experts told Gizmodo that the outcome of this specific case is uncertain because, for ChatGPT’s output to be deemed libel, Walters must prove OpenAI acted with “actual malice,” meaning it knew the information was false or showed reckless disregard for the truth. ChatGPT itself has no consciousness, and OpenAI and similar companies offer disclaimers about the potential for their generative AI to produce inaccurate results. However, “those disclaimers aren’t going to protect them from liability,” Lyrissa Lidsky told PopSci. Lidsky, the Raymond & Miriam Ehrlich Chair in US Constitutional Law at the University of Florida Law School, believes an impending onslaught of legal cases against tech companies and their generative AI products is a “serious issue” that courts will be forced to reckon with.

To Lidsky, the designers behind AI like ChatGPT are trying to have it both ways. “They say, ‘Oh, you can’t always rely on the outputs of these searches,’ and yet they also simultaneously promote them as being better and better,” she explained. “Otherwise, why do they exist if they’re totally unreliable?” And therein lies the potential for legal culpability, she says.

Lidsky believes that, from a defamation lawyer’s perspective, the most “disturbing” aspect is the AI’s repeatedly demonstrated tendency to wholly invent sources. And while defamation cases are generally based on humans intentionally or accidentally lying about someone, the culpability of a non-human speaker presents its own challenges, she said.

According to Lidsky, there is no easy solution to the growing problem, but it will likely require a “thoughtful response” from legislative bodies. She raises the possibility of a legal protection under which companies aren’t liable for AI-generated defamation so long as they take steps to correct false accusations as soon as they are notified of them. Lidsky, however, doesn’t place much faith in political actors to “respond sensitively” to these dilemmas, which entail a great deal of technical knowledge about large language models, as well as defamation law precedent.

“On the one hand we want to promote innovation, on the other hand you’re putting out a tool that invents lies about people that can harm them,” Lidsky said. “How do you balance the two sides of that equation?” 

PopSci reached out to both Walters and OpenAI for comment, and will update this article if they respond.
