June 2023 New Authority: Generative AI Mishaps
Author: Dr. Michael H. Hoeflich
It sometimes seems that AI has taken over the practice of law. Certainly, numerous AI programs are being marketed to lawyers and law firms; just as certainly, they are creating problems. Perhaps the most serious ethical and practical problem discovered to date is that created by lawyers who use “generative AI.”
George Lawton, a tech journalist, defines “generative AI” as follows:
Generative AI is a type of artificial intelligence technology that can produce various types of content, including text, imagery, audio and synthetic data. The recent buzz around generative AI has been driven by the simplicity of new user interfaces for creating high-quality text, graphics and videos in a matter of seconds…
The rapid advances in so-called large language models (LLMs) — i.e., models with billions or even trillions of parameters — have opened a new era in which generative AI models can write engaging text, paint photorealistic images and even create somewhat entertaining sitcoms on the fly. Moreover, innovations in multimodal AI enable teams to generate content across multiple types of media, including text, graphics and video. This is the basis for tools like Dall-E that automatically create images from a text description or generate text captions from images.
What has been so exciting—and controversial—is the use of generative AI in law practice to produce research memoranda and court documents, including briefs. The benefits of generative AI are obvious, as are the extreme ethical risks it poses. Several months ago in the LEMR, we pointed out some of these risks and predicted that lawyers who became early adopters of generative AI to produce practice documents might well run afoul of the Rules of Professional Conduct. Indeed, a number of ethics experts cautioned against such use. Unfortunately, lawyers have already faced judicial ire because of this.
A New York attorney, in a case involving Avianca Airlines, used generative AI to create a brief in the case. Unfortunately, the AI program produced a brief containing a number of bogus citations—citations that looked real but were, in fact, totally made up. The lawyer apparently had no idea the citations were fabricated and made the mistake of submitting the brief without checking them. When the judge discovered the fabrication, he was outraged and ordered a sanctions hearing for June 8. According to the lawyer’s statements to the court, the legal team on the case used the generative capabilities of ChatGPT to produce the brief. The lawyer stated that:
…the citations… were provided by ChatGPT, which also provided its legal source and assured the reliability of… its content.
In the Opinion and Order On Sanctions issued June 22, 2023, the presiding judge held that the attorneys “abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.” He listed the “many harms” that result from citing false authorities:
The opposing party wastes time and money in exposing the deception. The Court’s time is taken from other important endeavors. The client may be deprived of arguments based on authentic judicial precedents. There is potential harm to the reputation of judges and courts whose names are falsely invoked as authors of the bogus opinions and to the reputation of a party attributed with fictional conduct. It promotes cynicism about the legal profession and the American judicial system. And a future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.
The judge also found “bad faith on the part of the individual Respondents based upon acts of conscious avoidance and false and misleading statements to the Court” based on the attorneys’ failure to promptly “come clean” after being questioned about the existence of the cited cases. He ultimately imposed a $5,000 sanction.
It is not at all surprising that other judges around the U.S. have begun to issue orders about the use of generative AI in court filings. And it seems certain that disciplinary cases will begin to appear regarding the use of generative AI and the Rules of Professional Conduct.
How should lawyers handle the use of generative AI? The answer would seem to be: use it with great caution, check local court rules as to its use, monitor legal news and disciplinary cases on the subject, and learn as much as possible about the specific application they propose to use, including its reliability and weaknesses. Failure to be alert to the risks involved in using generative AI in law practice may well lead to both judicial and disciplinary problems no lawyer wants.
About Joseph, Hollander & Craft LLC
Joseph, Hollander & Craft is a mid-size law firm representing criminal defense, civil defense, personal injury, and family law clients throughout Kansas and Missouri. From our offices in Kansas City, Lawrence, Overland Park, Topeka and Wichita, our team of 25 attorneys covers a lot of ground, both geographically and professionally.
We defend against life-changing criminal prosecutions. We protect children and property in divorce cases. We pursue relief for clients who have suffered catastrophic injuries or the death of a loved one due to the negligence of others. We fight allegations of professional misconduct against medical and legal practitioners, accountants, real estate agents, and others.
When your business, freedom, property, or career is at stake, you want the attorney standing beside you to be skilled, prepared, and relentless — Ready for Anything, come what may. At JHC, we pride ourselves on offering outstanding legal counsel and representation with the personal attention and professionalism our clients deserve. Learn more about our attorneys and their areas of practice, and locate a JHC office near you.