Lawyer caught generating false legal cases with ChatGPT

May 28, 2023


New York lawyer Steven A. Schwartz used ChatGPT to research several ‘cases’ that turned out not to exist.

Schwartz’s case involved a man suing the Colombian airline Avianca. The plaintiff was represented by the law firm Levidow, Levidow & Oberman, which prepared a legal brief citing entirely fictitious cases.

Schwartz asked ChatGPT to confirm that the cases were real, but he attempted to cross-check only one of them in detail. ChatGPT assured him that case could be found in the Westlaw and LexisNexis databases, and on that basis Schwartz assumed the other cases were real, too.

It was later revealed that only one of the cases, Zicherman v. Korean Air Lines Co., 516 U.S. 217 (1996), was real, and even then ChatGPT misquoted the date and several other details.

After the plaintiff’s team submitted the brief, the presiding judge, US District Judge Kevin Castel, stated, “Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”

A filing on the court docket reads, “The Court is presented with an unprecedented circumstance. A submission filed by the plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to non-existent cases.”

Image: The court’s reply to the false cases submitted by Schwartz. Source: Court Listener.

Schwartz, who has 30 years of experience as an attorney, pleaded that this was an innocent mistake, telling the Court he “greatly regrets” using ChatGPT for research and was “unaware that its content could be false.” He also admitted to having used ChatGPT in other legal cases.

Schwartz, who was acting on behalf of Peter LoDuca, a colleague at the same law firm, is due to appear before the Court on June 8 to explain why he and the firm shouldn’t be sanctioned.

When ChatGPT checks ChatGPT

OpenAI is very clear that ChatGPT can misrepresent the truth, yet the AI often comes across as confident while providing contextually plausible ‘examples’ that aren’t factually correct, a behavior known as “hallucinating.”

This is an issue in academia, too, where ChatGPT often generates false references, sometimes going so far as to manufacture realistic-looking studies and experiments that never happened.

Many universities have released statements highlighting this. For example, Duke University states, “What you may not know about ChatGPT is that it has significant limitations as a reliable research assistant. One such limitation is that it has been known to fabricate or ‘hallucinate’ (in machine learning terms) citations.”

Analyzing references for inconsistencies has become a reliable way for tutors to catch students using ChatGPT to write essays. 

That’s precisely what happened to Schwartz: he was caught. He wasn’t the first, and he won’t be the last. He seemed genuinely ignorant, but ignorance doesn’t necessarily constitute a defense in court.

Generating false legal citations is an alarming example of ChatGPT’s fallibility and serves as a potent reminder to check, double-check, and triple-check ‘facts’ touted by generative AIs.  
