Artificial intelligence is all the rage. Watch the news or skim your social media feeds, and AI may be all you see. AI — and specifically ChatGPT — can create marketing plans, write our kids’ college essays and even compose music. But how well it performs varies wildly and is very much up for debate.

Recently, a lawyer learned that relying on ChatGPT could have disastrous results.

Background of the Case

On Aug. 27, 2019, Roberto Mata was on an Avianca airline flight from El Salvador to New York’s Kennedy International Airport when he was allegedly injured by a metal serving cart that hit his knee. Mata filed suit against the airline in February 2022.

Several months later, Avianca asked P. Kevin Castel, a federal judge in Manhattan, to dismiss the case because the statute of limitations had expired. Mata’s attorneys responded in April 2023 with a 10-page brief citing a number of court cases relevant to the lawsuit and the statute of limitations issue. The problem? The airline’s attorneys could not verify any of these cases, and neither could Judge Castel. The court cases in the brief were fictitious.

How It Happened

It seems the attorney had used ChatGPT to perform his legal research. The AI program produced the following court cases, complete with courts, docket numbers and dates:

  • Varghese v. China Southern Airlines Co Ltd, 925 F.3d 1339 (11th Cir. 2019).
  • Shaboon v. Egyptair, 2013 IL App (1st) 111279-U (Il App. Ct. 2013).
  • Petersen v. Iran Air, 905 F. Supp 2d 121 (D.D.C. 2012).
  • Martinez v. Delta Airlines, Inc, 2019 WL 4639462 (Tex. App. Sept. 25, 2019).
  • Estate of Durden v. KLM Royal Dutch Airlines, 2017 WL 2418825 (Ga. Ct. App. June 5, 2017).
  • Miller v. United Airlines, Inc, 174 F.3d 366 (2d Cir. 1999).

But none of them are real.

After being caught, he explained that he never meant to deceive anyone and threw himself on the mercy of the court. In an affidavit dated May 24, 2023, he claimed he had never used ChatGPT before and did not realize it could produce inaccurate information.

However, it is interesting to note that ChatGPT’s own website, under the heading “Limitations,” admits to the following:

  • May occasionally generate incorrect information.
  • May occasionally produce harmful instructions or biased content.
  • Has limited knowledge of world and events after 2021.

You might think such warnings would give a lawyer pause before proceeding.

The lawyer told the judge that he had used ChatGPT to supplement his research and that he regretted doing so. He said he had tried to be careful, even asking the AI platform whether all the cases were real — and ChatGPT assured him they were.

Avianca’s lawyers, who are familiar with airline-related court decisions, suspected the cases were bogus because they did not recognize any of them. Judge Castel also did some digging: he contacted the clerk of the 11th Circuit, who confirmed that the Varghese docket number belonged to a completely different case.

How the Judge Ruled

On June 22, Judge Castel imposed $5,000 fines on two attorneys from the firm that used ChatGPT. Although nominal, the penalty is intended as a warning to other lawyers about relying on AI for legal research.

“Many harms flow from the submission of fake opinions,” Judge Castel explained. “The opposing party wastes time and money in exposing the deception. The Court’s time is taken from other important endeavors. The client may be deprived of arguments based on authentic judicial precedents. There is potential harm to the reputation of judges and courts whose names are falsely invoked as authors of the bogus opinions and to the reputation of a party attributed with fictional conduct.”

In addition, he cautioned that such careless practice “promotes cynicism about the legal profession and the American judicial system. And a future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.”

How ChatGPT Works

ChatGPT — along with Google’s Bard and Microsoft’s Bing — can hold conversations that seem human. It can write code, draft emails and even help plan parties. It was trained by scanning billions of pages of content from the internet, including blog posts, news stories, tweets and public libraries. Having ingested all that information, it produces responses by predicting, one fragment at a time, which text is most likely to come next.
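To make that concrete, here is a minimal, hypothetical sketch of next-word prediction in Python. It is a toy counting model, not ChatGPT’s actual architecture (which is a vastly larger proprietary neural network), and the tiny corpus and the generate function are invented purely for illustration:

    import random
    from collections import defaultdict

    # Toy corpus; real systems train on billions of pages.
    corpus = (
        "the court granted the motion to dismiss . "
        "the court denied the motion to compel . "
        "the airline moved to dismiss the case ."
    ).split()

    # Count which words follow which in the corpus.
    next_words = defaultdict(list)
    for current, following in zip(corpus, corpus[1:]):
        next_words[current].append(following)

    def generate(start, length=8):
        """Build text by repeatedly sampling a plausible next word."""
        words = [start]
        for _ in range(length):
            candidates = next_words.get(words[-1])
            if not candidates:
                break
            words.append(random.choice(candidates))
        return " ".join(words)

    print(generate("the"))  # e.g., "the court granted the motion to compel ."

Note what the sketch optimizes for: fluency, not truth. The output reads like a real ruling, yet the model has no mechanism for checking whether any such ruling exists — which is precisely how plausible-looking but fictitious case citations can emerge.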

In this legal case, ChatGPT had likely ingested enough legal material to produce a convincing-looking brief — but it riddled that brief with facts and names drawn from myriad other cases.

Where AI Might Lead

For some time now, the construction industry has been discussing both the promise and the hazards of software like ChatGPT. This case is a cautionary tale: you cannot merely run a query and paste the output into an important document. Even before AI, believing information found on the internet required a certain amount of discernment. And there is growing worry that the rapid spread of AI could be used to propagate propaganda and other misinformation.

We clearly cannot rely on ChatGPT to do our thinking for us. On May 30, 2023, a group of industry leaders issued a one-sentence statement warning of the dangers of AI: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” Released by the Center for AI Safety, the statement was signed by more than 350 AI executives, engineers and scientists — the people who are building the technology in question.

Does AI bring global risks that may impact all of us? Could be.

For now, lawyers and other professionals should use caution when relying on AI. Although it can be a helpful tool in some situations, it clearly has its risks and limitations.

The information contained in this article is for general educational information only. This information does not constitute legal advice, is not intended to constitute legal advice nor should it be relied upon as legal advice for your specific factual pattern or situation.