Judicial AI Problems

Published: 28 November 2025

Author: Professor Michael H. Hoeflich, PhD, Editor-in-Chief

Legal Editor: Carrie E. Parker

This article is featured in Volume 6, Number 11 of the Legal Ethics and Malpractice Reporter, published November 28, 2025.


For the past several years, the legal profession has become increasingly aware of the problems and ethical dangers involved in using artificial intelligence in law practice, especially generative AI. Particularly troublesome is that AI platforms frequently return answers to prompts that are fabricated (“hallucinations”) or otherwise incorrect. Close to 500 such incidents have been reported, and judges have begun to sanction lawyers who fail to catch these errors before documents are filed with their courts.

In the past year, a related problem has emerged. It is not only lawyers who are using generative AI; it is also judges. And that may be an even greater problem than its misuse by some lawyers.

When a lawyer submits a flawed document because of AI, that document is not authoritative. Indeed, our adversary system is designed to keep bad law from entering the “stream of precedent” that may shape the law afterwards in negative ways. When a lawyer prepares a brief, counsel does so knowing that it will be read by opposing counsel, by the judge, and, in many cases, by judicial clerks. But when a judge writes an opinion, many of these safeguards are lacking. Further, a judicial decision has far more serious consequences for the litigants and for the stream of the law itself. If counsel discovers incorrect citations in a decision, they must take further action, which costs the litigant additional money. If the errors are not discovered immediately, the decision may be cited as precedent in other cases, potentially distorting the law and the legal system as a whole.

Because of the very great danger posed by judicial use of AI and the insertion of hallucinations or incorrect citations into the law, various states and private organizations have begun to issue guidelines for judicial use of AI. On October 10, 2025, the New York State Unified Court system announced that it was issuing an official policy for judges—the New York State Unified Court System Interim Policy on the Use of Artificial Intelligence.

After an introduction discussing the nature of AI, the policy outlines the dangers:

1.) Inaccurate or Fabricated Information

As noted above, the output produced by generative AI tools will sometimes contain hallucinations. Accordingly, the content generated by an AI program should not be used without careful editing. It is the responsibility of every user to thoroughly review such content and to independently confirm that it contains no fabricated or fictitious material.

In view of their limitations, generative AI tools should not be relied upon to provide accurate information or to draft communications about sensitive topics. Moreover, general-purpose AI programs (whether operating on a public model or on a private model) are not suitable for legal writing and legal research, as they may produce incorrect or fabricated citations and analysis. Even when using the AI-enhanced features that have been incorporated into established legal research platforms, any content generated by AI should be independently verified for accuracy.

2.) Bias and Other Inappropriate Output

The vast datasets on which generative AI systems are trained include material that reflects cultural, economic, and social biases and expressions of prejudice against protected classes of people. As a result, the content generated may promote stereotypes, reinforce prejudices, exhibit unfair biases, or contain otherwise undesirable, offensive, or harmful material. Accordingly, it is the responsibility of every user to thoroughly review any AI-generated content, to ensure that it does not reflect any unfair bias, stereotypes, or prejudice or contain any other inappropriate material, and to make any necessary revisions.

3.) Vulnerability of Confidential Information

Many publicly available generative AI platforms (ChatGPT, for example) operate on an open training model, which means, among other things, that the input received from user prompts is collected and used as further training material for their LLMs. Since the LLM can reproduce that material for anyone using an AI program connected to it, that input is potentially accessible by the public at large. Accordingly, once a UCS user inputs information into such a platform as part of a prompt or in an uploaded document, that information is no longer under UCS control, and may become publicly available.

In contrast to AI platforms that operate on these public models, which can be accessed by anyone and may store data for use in future training, some AI platforms operate on a private model. Platforms using private models are hosted or managed by an organization, and their use is typically restricted to members of that organization or individuals who have been granted access. They may be tailored to the organization’s specific needs, and they include additional security, compliance, and privacy measures.

Furthermore, users should be careful to avoid uploading copyrighted content into a generative AI program.

A number of the points made in this section of the document are extremely important. First, it is critical that judges and lawyers alike understand the difference between “private” and “public” platforms. Second, the document draws attention to the fact that, because of the way current AI platforms acquire information, they are subject to the biases in the data from which they learn and compose. Third, the document draws attention to the dangers of uploading intellectual property and the corresponding legal consequences of doing so.

The actual policy is short and clear:

  1. UCS users may use only those generative AI products that have been approved by the UCS Division of Technology and Court Research (DoTCR), which are identified in the attached Appendix.
  2. All judges and nonjudicial UCS employees with computer access shall be required to complete an initial training course, as well as continuing training, in the use of AI technology. No generative AI product may be used on any UCS-owned device or for any UCS-related work until the user has completed the initial training course.
  3. No user may input into any generative AI program that does not operate on a private model — by writing a prompt, uploading a document or file, or otherwise — any information that is confidential, private, or privileged, or includes personally identifiable information or protected health information, or is otherwise inappropriate for public release. A private model is a model that is under UCS control and does not share data with any public LLM.
  4. No user may upload into any generative AI program that does not operate on a private model any document that has been filed or submitted for filing in any court, even if the document is classified as public.
  5. Any user who uses a generative AI program to produce a document or any other content must thoroughly review the content produced by the program and make necessary revisions to ensure that it is accurate and appropriate, and does not reflect any unfair bias, stereotypes, or prejudice.
  6. No user may install on a UCS-owned device any software that is required for the use of a generative AI program, or use a UCS-owned device to access any such program that requires payment, a subscription, or agreement to terms of use, unless access to that program has been provided to the user by the UCS.
  7. AI tools may not be used on a UCS-owned device for personal purposes unrelated to UCS work.
  8. The approval of a generative AI product by the DoTCR signifies that the product is safe to use from a technological standpoint, but does not necessarily mean that, for a particular task, the use of that product is suitable or appropriate. Such approval by the DoTCR does not preclude any judge or UCS supervisor from prohibiting the use of such a product for a particular task by a person under their supervision.

The policy applies to all UCS judges, justices, and nonjudicial employees, and operates essentially everywhere a UCS-owned device is being used or UCS-related work is being performed on any device.

New York’s Interim Policy is sensible, and every state supreme court should take a serious look at it and formulate its own policy on this critical subject. Given the very real dangers in unregulated judicial use of AI in researching and drafting opinions, it seems necessary that every state adopt some set of rules that will minimize the dangers.


