First-of-its-kind libel lawsuit filed against OpenAI

June 8, 2023

In a landmark lawsuit, Georgia resident Mark Walters is taking legal action against OpenAI, creator of the AI chatbot ChatGPT, accusing the company of defamation.

The case, believed to be the first of its kind, questions AI’s responsibility for the dissemination of false information. 

The dispute revolves around a journalistic investigation by Fred Riehl, editor-in-chief of the gun publication AmmoLand. Riehl was researching a real case, Second Amendment Foundation (SAF) v. Ferguson, in which the SAF – an American non-profit that supports gun rights – accused Washington Attorney General Bob Ferguson of abusing his power to suppress gun rights. Alan Gottlieb, founder of the SAF, is one of the plaintiffs.

This is where ChatGPT comes in: asked by Riehl to summarize the case, it fabricated an allegation that Alan Gottlieb was suing Walters for “defrauding and embezzling funds” from the SAF. Here’s ChatGPT’s complete response:

“Defendant Mark Walters (‘Walters’) is an individual who resides in Georgia. Walters has served as the Treasurer and Chief Financial Officer of SAF since at least 2012. Walters has access to SAF’s bank accounts and financial records and is responsible for maintaining those records and providing financial reports to SAF’s board of directors. Walters owes SAF a fiduciary duty of loyalty and care, and is required to act in good faith and with the best interests of SAF in mind. Walters has breached these duties and responsibilities by, among other things, embezzling and misappropriating SAF’s funds and assets for his own benefit, and manipulating SAF’s financial records and bank statements to conceal his activities.”

According to Walters’ lawsuit, “Every statement of fact in the summary pertaining to Walters is false.” Walters wasn’t involved in the Ferguson case, nor was he a member or employee of the Second Amendment Foundation. 

Walters’ complaint argues, “ChatGPT’s allegations concerning Walters were false and malicious, expressed in print, writing, pictures, or signs, tending to injure Walters’ reputation and exposing him to public hatred, contempt, or ridicule.”

It has also been argued that Riehl should have known ChatGPT was untrustworthy on such matters and should therefore have disregarded the AI’s response.

Does Walters have a chance?

Eugene Volokh, a UCLA law professor currently researching legal liability for AI outputs, told Gizmodo, “such libel claims are in principle legally viable. But this particular lawsuit should be hard to maintain.”

Notably, the defamatory output was never actually published beyond Riehl’s own query. Volokh says, “There seem to be no allegations of actual damages—presumably Riehl figured out what was going on, and thus Walters lost nothing as a result.”

To win damages, Walters would have to prove OpenAI’s output exhibited “knowledge of falsehood or reckless disregard of the possibility of falsehood,” also known as “actual malice.”

In other words, AI hallucinations are just that – hallucinations, albeit not of the benign variety in this case. They aren’t generated with knowledge of falsehood or reckless disregard for the truth.

This isn’t the only time ChatGPT has fabricated legal information, either: not long ago, a New York lawyer faced possible sanctions after submitting court filings citing cases the AI had entirely invented.

Volokh is nonetheless critical of arguments that OpenAI’s disclaimers should shield it from such suits. He says, “OpenAI acknowledges there may be mistakes but [ChatGPT] is not billed as a joke… It’s billed as something that is often very reliable and accurate.”

There are also open questions about whether chatbots fall under Section 230 of the Communications Decency Act, which shields internet companies from liability for third-party content returned in response to queries – protection that may not extend to content an AI generates itself.

Timeline of ChatGPT legal issues

Since April, OpenAI has faced three accusations that its AI model generated false or fabricated information:

  1. Brian Hood case: Brian Hood, the regional mayor of Hepburn Shire in Australia, threatened to sue OpenAI in April. ChatGPT falsely implicated him in a bribery scandal, labeling him a convicted criminal. In reality, Hood was the whistleblower who exposed the corruption.
  2. Jonathan Turley case: A similar issue arose with Jonathan Turley, a law professor at George Washington University. According to Turley, ChatGPT had falsely accused him and several other professors of sexual harassment. The AI model reportedly concocted a Washington Post story and fabricated quotes to substantiate the claims. This incident highlighted a growing problem with generative AI models producing false quotes and citations.
  3. Use in legal briefs: A recent case involved a lawyer who included in a legal brief what a judge determined to be “bogus judicial decisions” generated by ChatGPT. The lawyer was representing a client suing an airline and now faces possible sanctions as a result.

Top legal professionals are still trying to wrap their heads around the implications of generative AI’s outputs, which are challenging established definitions of libel, defamation, copyright and intellectual property.

Since these AIs are available across many jurisdictions, the legal questions around their regulation and governance will likely become exceptionally tough to unravel, and many similar cases are reportedly in the pipeline.

Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
