FEATURE ARTICLE: Judges, Judicial Ethics, and AI
Author: Professor Michael H. Hoeflich, PhD, Editor-in-Chief
Legal Editor: Carrie E. Parker
This article is featured in Volume 6, Number 9 of the Legal Ethics and Malpractice Reporter, published September 30, 2025.
During the past few years, the lamentable habit of generative AI platforms of producing false citations has led to those citations appearing in documents submitted to the courts, causing controversy, angering judges, humiliating lawyers, and raising serious questions about the ethical obligations of lawyers who use AI. Unfortunately, as flawed documents continue to be submitted to courts, judges may incorporate these citations into their opinions without knowing they are false. Are there ethical implications for judges who do so?
Increasingly, we are finding cases in which a judge has been misled by a false citation presented in a brief or other document before the court. In Shahid v. Esaam, a Georgia case, the wife objected to the judgment based on improper service; the husband’s brief included two fake cases, which the trial court relied upon in accepting the husband’s argument. While the appellate court declined to make factual findings about how this occurred, it certainly suggested the husband’s attorney was at fault:
We are troubled by the citation of bogus cases in the trial court’s order. As the reviewing court, we make no findings of fact as to how this impropriety occurred, observing only that the order purports to have been prepared by Husband’s attorney, Diana Lynch. We further note that Lynch had cited the two fictitious cases that made it into the trial court’s order in Husband’s response to the petition to reopen, and she cited additional fake cases both in that Response and in the Appellee’s Brief filed in this Court.
As noted above, the irregularities in these filings suggest that they were drafted using generative AI. In his 2023 Year-End Report on the Federal Judiciary, Chief Justice John Roberts warned that “any use of AI requires caution and humility.” Roberts specifically noted that commonly used AI applications can be prone to “hallucinations,” which have caused lawyers using those programs to submit briefs citing non-existent cases.
Shahid v. Esaam, 376 Ga. App. 145, 146–47, 918 S.E.2d 198 (2025).
Should the Court of Appeals of Georgia also have said something about the judge’s responsibility in reviewing and approving the order? When a judge approves a document submitted to the court that contains hallucinations does responsibility for the error shift or expand? Or does it remain squarely on the drafter?
A related issue arises when judges decide to use generative AI in their own research rather than relying upon documents submitted by lawyers to the court. AI platforms do not treat judges differently from how they treat lawyers: they are just as prone to produce a hallucinated case in response to a judge’s or judicial clerk’s research query as they are when lawyers use them.
Before looking at the ethical issues surrounding judicial use of AI with its concomitant problems, it is important to understand the dangers that hallucinations pose both for litigants and for the legal system as a whole.
Fundamentally, our common law system depends upon precedent, the citation of prior cases relevant to the case at hand, to ensure certainty and predictability in judicial decision making. Precedent provides the guardrails that guide judges in their analysis and decision making. In fact, the system of precedent is a form of what people now refer to as crowd-sourcing, but one composed of learned and conscientious judges, representing, in some cases, generations of analysis and decision-making. Indeed, the whole of precedent, in effect, represents the combined wisdom of the law. If a judge is misled by a hallucinated case, the taint can spread beyond the particular case to the law governing future cases as well. If enough future cases cite hallucinated precedent, the common law ultimately fails and becomes the product of an artificial intelligence, one that may be quite different from what human judges would have said and decided without the taint.
The inclusion of hallucinated cases in judicial decisions can also mean that those decisions or orders produce results that do not, in fact, represent the state of the law but rather the product of a flawed computer algorithm. One has to react with horror at the prospect of this becoming common. Justice will not be served in such an event, and people will lose faith in the legal system as a result.
What are the ethical consequences for a judge who is misled by an AI generated hallucination and uses a false case or cases in the decision-making process? The Model Code of Judicial Conduct can provide a guide. Canon 1 of the Model Code reads:
A judge shall uphold and promote the independence, integrity, and impartiality of the judiciary, and shall avoid impropriety and the appearance of impropriety.
Rule 1.2 states:
A judge shall act at all times in a manner that promotes public confidence in the independence, integrity, and impartiality of the judiciary, and shall avoid impropriety and the appearance of impropriety.
Canon 2 of the Model Code reads:
A judge shall perform the duties of judicial office impartially, competently, and diligently.
Comment 5 reads:
Actual improprieties include violations of law, court rules or provisions of this Code. The test for appearance of impropriety is whether the conduct would create in reasonable minds a perception that the judge violated this Code or engaged in other conduct that reflects adversely on the judge’s honesty, impartiality, temperament, or fitness to serve as a judge.
The focus on competence and fitness, as well as temperament, in these provisions strongly suggests that a judge who includes a hallucinated case in the decision-making process, whether it was submitted by a lawyer or turned up in the judge’s own research, will face questions as to how this happened. If the answer is that the judge did not check the citations used, then the judge may well face discipline.
Although it is not clear that a specific rule on judicial technical competence is necessary,1 one state has added such a rule to its judicial code. On September 25, 2025, Arizona adopted new Comment 1 to Rule 2.5, which is modeled on Rule of Professional Conduct 1.1, Comment 8. The new Arizona language states:
Competence in the performance of judicial duties requires the legal knowledge, skill, thoroughness, and preparation reasonably necessary to perform a judge’s responsibilities of judicial office, including the use of, and knowledge of the benefits and risks associated with, technology relevant to service as a judicial officer.
This addition will become effective September 1, 2026.
One partial answer to this problem is for every judge to institute a system of monitoring and verifying all citations used in the decision-making process. For judges who have clerks, this seems a reasonable process to require. But many judges in state systems may not have clerks, or their clerks may already be overburdened with other tasks. Introducing systematic and comprehensive monitoring of all authorities cited by lawyers will require increased financial and personnel support for the courts, which may simply be beyond the capacity of many court systems.
As more judges discover AI-generated hallucinations in documents submitted to them or produced by them or their clerks, the judiciary at every level must begin to formulate solutions to this serious threat to the integrity of the law and the legal system, and to the job security of judges.
1 Michigan and West Virginia authorities have taken the position that technical competence is part of general judicial competence. See Michigan Ethics Op. JL-155 (Oct. 27, 2023); West Virginia Ethics Op. 2023-22 (Oct. 13, 2023).
About Joseph, Hollander & Craft LLC
Joseph, Hollander & Craft is a mid-size law firm representing criminal defense, civil defense, personal injury, and family law clients throughout Kansas and Missouri. From our offices in Kansas City, Lawrence, Overland Park, Topeka and Wichita, our team of 25 attorneys covers a lot of ground, both geographically and professionally.
We defend against life-changing criminal prosecutions. We protect children and property in divorce cases. We pursue relief for clients who have suffered catastrophic injuries or the death of a loved one due to the negligence of others. We fight allegations of professional misconduct against medical and legal practitioners, accountants, real estate agents, and others.
When your business, freedom, property, or career is at stake, you want the attorney standing beside you to be skilled, prepared, and relentless — Ready for Anything, come what may. At JHC, we pride ourselves on offering outstanding legal counsel and representation with the personal attention and professionalism our clients deserve. Learn more about our attorneys and their areas of practice, and locate a JHC office near you.