Donald Trump’s former lawyer, Michael Cohen, disclosed that he inadvertently provided his attorney with AI-generated false case citations.
These citations were mistakenly included in an official court filing.
Cohen explained this oversight in a sworn statement to a federal court in Manhattan, noting that Google Bard produced the citations.
He said he had not been aware at the time that generative AI is prone to fabricating plausible but false information, a phenomenon known as hallucination.
The issue came to light when US District Judge Jesse Furman observed that three legal cases cited in Cohen’s request for an early termination of his supervised release were, in fact, non-existent.
Supervised release is a set period after serving a prison sentence during which an individual is monitored and must comply with specific conditions imposed by the court.
Judge Furman questioned Cohen’s lawyer, David Schwartz, as to why he should not face disciplinary action for citing these imaginary cases.
In his response, Cohen, who lost his law license about five years ago following his conviction on various financial and election-related fraud charges, said, “I deeply regret any problems Mr. Schwartz’s filing may have caused.”
He also admitted he had not kept up with recent developments in legal technology, specifically the ability of tools like Google Bard to generate plausible but non-existent legal citations.
This isn’t the first time a US lawyer has been caught out by false AI-generated legal research.
Earlier in the year, Steven Schwartz (no relation to David Schwartz), a New York lawyer, faced sanctions after using ChatGPT for legal research that produced bogus cases, which were then cited in a client’s legal complaint.
Schwartz and his colleague Peter LoDuca appeared in court to explain how the fabricated cases ended up in their filing and admitted to citing them in their legal work.
US District Judge Kevin Castel said of the legal brief containing false cases, “Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”
While AI users should by now have a firmer grasp of the technology’s tendency to produce fake and misleading information, it’s highly unlikely we’ve heard the last of these kinds of incidents.