The Year Technology Ate the Legal Profession

Published: 31 December 2023 | 8:00AM

Image created in Bing Image Creator using generative AI.


December is a month to look back on the year that is ending and reflect on where we were at its beginning and where we expect to be in the year ahead. This year, the legal profession faced a remarkable upheaval, much of it driven by technology.

Within the past twelve months, developments in artificial intelligence have come at a rapidly accelerating pace, and the legal profession is playing catch-up. A year ago, there was very little definitive guidance on lawyers’ use of AI or its ethical consequences. Now the profession has ABA Resolution 604, plus comments and opinions from state ethics committees and judiciaries. Academic and industry commentators fill hundreds of pages on the subject each week. Indeed, there is now enough good work on the subject that generative AI, which draws its content from other sources, can write a decent essay on the ethical issues involved when lawyers use it. To illustrate, OpenAI’s ChatGPT produced the following:

Title: Exploring the Ethical Implications of Lawyers Using Generative AI: A Legal Perspective

Introduction:

The integration of Generative AI (Artificial Intelligence) tools in the legal profession has significantly transformed the landscape of legal practice. These innovative technologies offer lawyers powerful tools for research, document drafting, and analysis. However, the use of AI in the legal field raises intricate ethical considerations, especially concerning professional responsibility, confidentiality, and the quality of legal representation.

Ethical Responsibilities of Lawyers:

Legal ethics encompass a set of principles and rules that guide the conduct of lawyers in serving their clients, upholding justice, and maintaining the integrity of the legal system. Central to these ethical obligations are competence, diligence, confidentiality, loyalty, and the duty to provide zealous representation.

Use of Generative AI in Legal Practice:

Generative AI platforms equipped with Natural Language Processing (NLP) capabilities aid lawyers in various tasks, from legal research to drafting contracts, pleadings, and briefs. These technologies have the potential to enhance efficiency, accuracy, and cost-effectiveness in legal work. They assist in analyzing vast volumes of case law, statutes, and precedents, allowing lawyers to access comprehensive information swiftly.

Ethical Considerations:

  1. Competence and Diligence: Lawyers must maintain the competence required to ethically represent their clients. While AI tools enhance efficiency, over-reliance on such tools without understanding their limitations could compromise the lawyer’s duty to provide competent advice and representation.
  2. Quality of Legal Advice: The reliance on AI-generated content might inadvertently lead to errors or oversights if lawyers do not critically analyze or verify the output. This raises concerns about the quality of legal advice provided to clients.
  3. Confidentiality and Data Security: AI platforms process vast amounts of sensitive data. Lawyers must ensure the confidentiality and security of client information when using AI tools, addressing potential risks of data breaches or unauthorized access.
  4. Transparency and Disclosure: Clients have the right to know if AI technology is used in their legal matters. Lawyers should transparently disclose the use of AI tools, clarifying the role of these technologies in their representation.
  5. Accountability and Responsibility: While AI can assist in decision-making, lawyers ultimately bear responsibility for the advice provided and actions taken in legal matters. They must not absolve themselves of accountability by solely attributing decisions to AI-generated content.

Conclusion:

The integration of Generative AI in legal practice presents both opportunities and challenges for lawyers concerning ethical obligations. Balancing the advantages of AI tools with ethical responsibilities requires a nuanced approach. Lawyers must maintain their professional competence, ensure the quality of legal services, safeguard client confidentiality, and uphold transparency when utilizing AI technology. Striking this balance will foster ethical legal practice while harnessing the benefits of AI innovation in the legal profession.

My conclusion upon reading this AI-generated memorandum is that it is not bad. I would give it a “B” if one of my students submitted it.

I asked the same question of Bing Chat and received this answer:

Certainly. The use of generative AI in the legal profession is a topic of growing interest. According to the State Bar of California Standing Committee on Professional Responsibility and Conduct, generative AI is a tool that has wide-ranging application for the practice of law and administrative functions of the legal practice for all licensees, regardless of firm size, and all practice areas. However, like any technology, generative AI must be used in a manner that conforms to a lawyer’s professional responsibility obligations, including those set forth in the Rules of Professional Conduct and the State Bar Act.

The Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law provides a comprehensive framework for lawyers to use generative AI in a manner that conforms to their professional responsibility obligations. The guidance emphasizes that a lawyer should understand the risks and benefits of the technology used in connection with providing legal services. How these obligations apply will depend on a host of factors, including the client, the matter, the practice area, the firm size, and the tools themselves, ranging from free and readily available to custom-built, proprietary formats.1

Generative AI use presents unique challenges; it uses large volumes of data, there are many competing AI models and products, and, even for those who create generative AI products, there is a lack of clarity as to how it works. In addition, generative AI poses the risk of encouraging greater reliance and trust on its outputs because of its purpose to generate responses and its ability to do so in a manner that projects confidence and effectively emulates human responses.1

The guidance also highlights that a lawyer must not input any confidential information of the client into any generative AI solution that lacks adequate confidentiality and security protections. A lawyer must anonymize client information and avoid entering details that can be used to identify the client. A lawyer or law firm should consult with IT professionals or cybersecurity experts to ensure that any AI system in which a lawyer would input confidential client information adheres to stringent security, confidentiality, and data retention protocols.1

In conclusion, while generative AI can be a valuable tool for lawyers, it must be used in a manner that conforms to a lawyer’s professional responsibility obligations. Lawyers should understand the risks and benefits of the technology used in connection with providing legal services and ensure that they adhere to stringent security, confidentiality, and data retention protocols.1

1Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law

I would give a higher grade to this memorandum. To me, it is a more lawyerly piece of prose and a more nuanced analysis: a “B+.”
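Both memoranda flag confidentiality, and the anonymization point in the California guidance quoted above is concrete enough to illustrate. What follows is a minimal sketch in Python, not a recommendation of any particular tool: the replacement map and the sample prompt are hypothetical, and no automated redaction substitutes for a lawyer’s own review of what leaves the firm.

```python
# Illustrative sketch only: a crude first pass at anonymizing client
# details before any text is sent to a generative AI service. The
# replacement map below is hypothetical; automated redaction supplements,
# and never replaces, human review of every prompt.
def anonymize(text: str, replacements: dict[str, str]) -> str:
    """Replace each identifying string with a neutral placeholder."""
    for identifying, placeholder in replacements.items():
        text = text.replace(identifying, placeholder)
    return text

prompt = anonymize(
    "Draft a demand letter for Jane Roe concerning the 5 May collision.",
    {"Jane Roe": "[CLIENT]", "5 May": "[INCIDENT DATE]"},
)
print(prompt)  # the lawyer reviews this before it reaches any AI tool
```

Even a substitution this simple captures the core of the quoted guidance: identifying details should be stripped, and a human should confirm as much, before a prompt is ever submitted.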

Over the past year, lawyers and computer scientists have found that there are two broad categories of problems with the use of generative AI by lawyers and law firm staff. The first consists of problems in the software itself, the most prominent of which is so-called “hallucinations.”2 Another serious software problem is the lamentable tendency of many AI programs to exhibit the same types of biases that exist in the large datasets from which they source their information.3 The second is the risk of misuse of generative AI by lawyers and other human beings. These problems are all quite serious and remain unresolved. They have also given rise to attempts to regulate the use of AI by lawyers.

The ABA adopted Resolution 604 at its 2023 midyear meeting. Here are a few key passages from the accompanying Report:

The ABA urges organizations that design, develop, deploy, and use AI systems and capabilities to follow these guidelines: 

  • Developers, integrators, suppliers, and operators (“developers”) of AI systems and capabilities should ensure that their products, services, systems, and capabilities are subject to human authority, oversight, and control.
  • Responsible individuals and enterprises should be accountable for the consequences caused by their use of AI products, services, systems, and capabilities, including any legally cognizable injury or harm caused by their use, unless they have taken reasonable measures to mitigate against that harm or injury.
  • Accountability and human authority, oversight and control are closely interrelated legal concepts. In the context of AI, they present key concerns, given that AI is increasingly being used in a variety of contexts to make decisions that can significantly impact people’s lives, including evaluating applicants for jobs, determining who receives access to loans, assessing criminal defendants’ likelihood of being a repeat offender in connection with bail proceedings, screening rental applicants, and determining how self-driving cars should navigate through complex traffic and driving situations.

And:

The ABA House of Delegates passed two Resolutions that address AI. This Resolution builds on and is consistent with those existing ABA policies. 

  • ABA urges courts and lawyers to address the emerging ethical and legal issues related to the usage of artificial intelligence (“AI”) in the practice of law, including (1) bias, explainability, and transparency of automated decisions made by AI; (2) ethical and beneficial usage of AI; and (3) controls and oversight of AI and the vendors that provide AI.
  • ABA urges federal, state, local, territorial and tribal governments to:
    • Ensure due process and refrain from using pretrial risk assessment tools unless the data supporting the risk assessment is transparent, publicly disclosed, and validated; and
    • Recognize that an individual’s criminal history and other criteria may reflect structurally biased application of laws, policies or practices, as well as conscious or unconscious bias.

While Resolution 604 provides high-level guidance to parties involved in the production and use of AI products, it does not offer the kind of specific direction that lawyers need to use AI, in all its forms, in their practice. Every lawyer should read Resolution 604, but it will not satisfy his or her current needs, especially concerning the ethical issues involved in using generative AI in practice. For that, lawyers must turn to courts, ethics committees, and commentators.

The most important case on the use of generative AI is Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y.). One of the lawyers in the case used generative AI to produce a brief. Unfortunately, the AI suffered from “hallucinations” and cited fictional cases. When the court discovered this, the judge was, quite appropriately, concerned that the lawyer had violated the Rules of Professional Conduct. In the end, the judge imposed sanctions on the lawyers involved, stating:

In researching and drafting court submissions, good lawyers appropriately obtain assistance from junior lawyers, law students, contract lawyers, legal encyclopedias and databases such as Westlaw and LexisNexis. Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings. Rule 11, Fed. R. Civ. P. Peter LoDuca, Steven A. Schwartz and the law firm of Levidow, Levidow & Oberman P.C. (the “Levidow Firm”) (collectively, “Respondents”) abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question…

Mata v. Avianca, Inc., --- F. Supp. 3d ---, 2023 WL 4114965 (S.D.N.Y. June 22, 2023).

When the Mata case first came to the attention of the legal profession, the general reaction was one of astonishment and fear. Most lawyers did not understand the limits of generative AI or the problem of hallucinations. Suddenly, lawyers realized that AI was something very complex and dangerous.

Since Mata, courts and bar committees have begun to explore possible regulatory limits, both direct and indirect, on lawyers’ use of generative AI. Furthermore, a number of developers and providers of AI services claim to have made progress in addressing several systemic problems, including hallucinations and bias. However, problems persist.

Judge Brantley Starr of the Northern District of Texas issued a standing order applicable to all lawyers appearing before his court:

All attorneys and pro se litigants appearing before the Court must, together with their notice of appearance, file on the docket a certificate attesting either that no portion of any filing will be drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence will be checked for accuracy, using print reporters or traditional legal databases, by a human being. 

These platforms are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument. But legal briefing is not one of them. Here’s why. These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up—even quotes and citations.

Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth). Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle.

Any party believing a platform has the requisite accuracy and reliability for legal briefing may move for leave and explain why. Accordingly, the Court will strike any filing from a party who fails to file a certificate on the docket attesting that they have read the Court’s judge-specific requirements and understand that they will be held responsible under Rule 11 for the contents of any filing that they sign and submit to the Court, regardless of whether generative artificial intelligence drafted any portion of that filing.

The key concept in this order is that, for certain tasks, lawyers are ethically required to check the output of any generative AI product they employ for accuracy and bias. Judge Starr recognizes that, while AI programs can be quite useful and powerful tools for some legal tasks, they have not yet achieved an adequate level of performance to be trusted with the most critical tasks that lawyers perform. Judge Starr’s order also includes a model template for lawyers practicing in his court to use:

CERTIFICATE REGARDING JUDGE-SPECIFIC REQUIREMENTS

I, the undersigned attorney, hereby certify that I have read and will comply with all judge-specific requirements for Judge Brantley Starr, United States District Judge for the Northern District of Texas.

I further certify that no portion of any filing in this case will be drafted by generative artificial intelligence or that any language drafted by generative artificial intelligence—including quotations, citations, paraphrased assertions, and legal analysis—will be checked for accuracy, using print reporters or traditional legal databases, by a human being before it is submitted to the Court. I understand that any attorney who signs any filing in this case will be held responsible for the contents thereof according to applicable rules of attorney discipline, regardless of whether generative artificial intelligence drafted any portion of that filing.

________________
ATTORNEY NAME(S)
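As a practical matter, the human verification this certificate contemplates can be assisted, though never replaced, by simple tooling. The sketch below is purely illustrative and rests on an assumed, simplified citation pattern: it extracts citation-like strings from a draft so a human reviewer has a checklist to run against print reporters or a traditional legal database.

```python
# Illustrative sketch only -- not a court-endorsed or production tool.
# It pulls citation-like strings out of an AI-assisted draft so a human
# reviewer has a checklist to verify against print reporters or a
# traditional legal database. The pattern is a simplified assumption
# covering a few common federal reporters (e.g., "123 F.3d 456").
import re

CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+"                                # volume number
    r"(?:U\.S\.|S\.\s?Ct\.|F\.(?:2d|3d|4th)?|"     # a few common reporters
    r"F\.\s?Supp\.(?:\s?2d|\s?3d)?)"
    r"\s+\d{1,4}\b"                                # first page
)

def citation_checklist(draft_text: str) -> list[str]:
    """Return the unique citation-like strings in a draft, in order."""
    seen: list[str] = []
    for match in CITATION_PATTERN.finditer(draft_text):
        cite = match.group(0)
        if cite not in seen:
            seen.append(cite)
    return seen

if __name__ == "__main__":
    draft = "See Smith v. Jones, 123 F.3d 456, 460 (9th Cir. 1997)."
    for cite in citation_checklist(draft):
        print(f"VERIFY BY HAND: {cite}")
```

A script like this can only surface what to check; as Mata demonstrates, deciding whether each cited case actually exists and says what the brief claims remains an irreducibly human, and professional, obligation.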

Looking back on the legal profession’s reaction to AI in 2023 reminds me of a phrase tech companies like to use: “early adopters.” Early adopters are individuals who want to be the first to have or use a new device or new program. Many are willing to use “beta” versions of software or hardware—in effect, to be guinea pigs for the tech companies to help eliminate “bugs” in the programs or devices under development. Tech companies and software developers clearly benefit from early adopters. However, the legal profession must ask whether (and, if so, how) lawyers should be early adopters of new technologies that have the potential to harm their clients. This is a particularly critical question if lawyers do not tell their clients that they are using such technologies.

The legal profession and the world as a whole are at the beginning of what will most likely be a technological revolution—one that will profoundly affect the way we live and work. However, being at the beginning, we do not yet know enough about how AI will function in the various roles it is likely to play. Until we reach that point and recognize both the intended and unintended consequences of using AI in law practice, we cannot eliminate human supervision of AI products. And we must exercise extreme caution in deciding when to use generative AI.

There can be little doubt that 2024 will bring new developments in artificial intelligence, including products developed specifically for use by lawyers. My advice is that, before adopting AI for law office tasks (especially generative AI or other public-facing uses such as client services), lawyers should think carefully about the potential impact on their obligations under the Rules of Professional Conduct.

References:

  1. State Bar of California Standing Committee on Professional Responsibility and Conduct, Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law, https://www.calbar.ca.gov/Portals/0/documents/ethics/Generative-AI-Practical-Guidance.pdf
  2. Shawn Curran, Oliver Bethell & Sam Lansley, “Hallucination is the last thing you need,” Travers Smith Artificial Intelligence.
  3. Luca C.M. Melchionna, “Bias and Fairness in Artificial Intelligence,” NYSBA.org (June 29, 2023).

 
