
Attorneys "Truly Mortified" as They Face Sanctions Over ChatGPT's "Hallucination"

Amrusha Chati


15 July 2023 · 4 min read


Reality is sometimes stranger than fiction. Like when an attorney lands in trouble for the "hallucination" of an artificial intelligence-powered chatbot.

Attorney Steven Schwartz found himself in this situation when a court filing he submitted was found to contain "bogus" citations. The reason? ChatGPT wrote the filing, and it made up the citations.

This has left the attorneys "truly mortified" as they face sanctions over ChatGPT's hallucination.

Here's a breakdown of the case that's making some bizarre legal history.

What is the ChatGPT hallucination case about?

In February 2022, Roberto Mata sued the Colombian airline Avianca for an alleged injury. He claimed a metal beverage cart hit him during a flight to New York in 2019.

Avianca's lawyers asked for a dismissal, citing the statute of limitations. In response, Mata's lawyer, Steven A. Schwartz, of the law firm Levidow, Levidow & Oberman, submitted a brief to the Manhattan Federal Court arguing for the case to be continued.

At first glance, the 10-page brief seemed thorough. It cited more than half a dozen legal precedents supporting their case, such as Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines, and Varghese v. China Southern Airlines.

But opposing counsel Condon & Forsyth LLP, who were familiar with aviation litigation, raised concerns. When the court looked up the cited cases, it found… nothing.

That's because ChatGPT had made them all up.

In a case of AI "hallucination," ChatGPT invented detailed information about cases that didn't exist.

How did the mix-up happen?

In a show cause order, U.S. District Judge P. Kevin Castel called this "an unprecedented circumstance." He summoned attorneys Steven Schwartz and Peter LoDuca to a hearing on 8 June.

An angry Judge Castel criticized the fictitious legal research and "bogus" citations, some of which quoted legal "gibberish."

Steven A. Schwartz has been a practicing lawyer for over 30 years but is not admitted to practice in the Southern District of New York. Because he had filed the original lawsuit before it was moved to federal court, he continued working on it. His colleague at the firm, Peter LoDuca, was the attorney of record on the case and is also being held liable.

In an affidavit, Schwartz attempted to defend his role. His statement reads:

“As the use of generative artificial intelligence has evolved within law firms, your affiant consulted the artificial intelligence website ChatGPT in order to supplement the legal research performed.”

He says that he did try to verify the information. But he opted to do so by asking ChatGPT itself.

“The citations and opinions in question were provided by ChatGPT, which also provided its legal source and assured the reliability of its content.”

Facing sanctions for deceiving the court, Schwartz has pleaded ignorance and not bad faith. He claims this was the first instance of him doing legal research using the AI and “therefore was unaware of the possibility that its content could be false.”

ChatGPT has dazzled people with its human-like responses, and professionals across fields have rushed to put it to work. But in high-stakes domains like legal research and court filings, ChatGPT and other AI tools cannot be trusted blindly.

A Cautionary Tale

A hallucinating piece of tech, bogus case law, and two apologetic lawyers. It seems almost comical on the surface, but it raises very serious concerns.

Every case ChatGPT invented came complete with convincing details, including fabricated judicial decisions and quotes. Yet none of these supposed precedents were double-checked until the brief landed in federal court.

Ironically, this unusual occurrence will itself serve as a precedent: new technology, however disruptive, must be treated with caution. Rather than making legal research obsolete, the episode is a reminder to do due diligence when assimilating new tools into professional workflows.

This is reflected in the lawyers' apologetic statement, in which Schwartz says he:

“greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity.”

AI is a new, exciting, and rapidly evolving field, but it has its pitfalls. It has raised serious questions about intellectual property rights, privacy, and other societal-scale risks.

As promising AI technologies evolve, so do their limits and their scope. We will all have to learn to embrace the good parts while putting checks and balances in place to avoid mishaps like this one.


Amrusha is a versatile professional with over 12 years of experience in journalism, broadcast news production, and media consulting. Her impressive career includes collaborating extensively with prominent global enterprises. She garnered recognition for her exceptional work in producing acclaimed shows for Bloomberg, a renowned business news network. Notably, these shows have been incorporated into the esteemed curriculum of Harvard Business School. Amrusha's expertise also encompassed a 4-year tenure as a consultant at Omidyar Network, a leading global impact investing firm. In addition, she played a pivotal role in the launch and content strategy management of the startup Live History India.