Competence and the Lawyer’s Use of GenAI: ABA Formal Opinion 512

Published: 31 August 2024

FEATURE ARTICLE: Competence and the Lawyer’s Use of GenAI: ABA Formal Opinion 512

Author: Professor Michael H. Hoeflich, PhD, Editor-in-Chief

Legal Editor: Carrie E. Parker; Design & Publishing Editor: Matthew T. Stephens

This article is featured in Volume 5, Number 8 of the Legal Ethics and Malpractice Reporter, published August 31, 2024.


The introduction of AI, and its perceived advantages for lawyers, has revolutionized the legal profession over the past two years. At the same time, many commentators have sounded notes of caution about the practical and ethical problems that may befall lawyers who use it. Now the American Bar Association has finally provided some guidance. Several weeks ago, the ABA issued Formal Opinion 512, which explores a number of important points for lawyers using or wishing to use AI—especially generative AI—in their practices. Although the opinion focuses on generative AI, the committee notes that many of the general ethical issues it discusses are fundamentally the same as those that arise with the use of other forms of AI.

The opinion is arranged by ethical duty, examining how each duty bears on a lawyer’s use of generative AI. Generative AI tools are typically built on large language models (LLMs) trained on immense amounts of data, much of it scraped from the web. These models allow the tool to provide information to the user and even generate documents. A number of AI companies now tailor their products to the legal profession and draw on specialized legal databases as well.

While the opinion provides guidance on the application of the ethical rules, much of its advice comes as no surprise. Nevertheless, it is worth looking at each rule and its application. On the issue of competence, the opinion states what I think most lawyers will consider a reasonable guide:

To competently use a GAI tool in a client representation, lawyers need not become GAI experts. Rather, lawyers must have a reasonable understanding of the capabilities and limitations of the specific GAI technology the lawyer might use. This is not a static undertaking. Given the fast-paced evolution of GAI tools, technological competence presupposes that lawyers remain vigilant about the tools’ benefits and risks. Although there is no single right way to keep up with GAI developments, lawyers should consider reading about GAI tools targeted at the legal profession, attending relevant continuing legal education programs, and, as noted above, consulting others who are proficient in GAI technology.

Because GAI tools are subject to mistakes, lawyers’ uncritical reliance on content created by a GAI tool can result in inaccurate legal advice to clients or misleading representations to courts and third parties. Therefore, a lawyer’s reliance on, or submission of, a GAI tool’s output—without an appropriate degree of independent verification or review of its output—could violate the duty to provide competent representation as required by Model Rule 1.1.

The appropriate amount of independent verification or review required to satisfy Rule 1.1 will necessarily depend on the GAI tool and the specific task that it performs as part of the lawyer’s representation of a client. For example, if a lawyer relies on a GAI tool to review and summarize numerous, lengthy contracts, the lawyer would not necessarily have to manually review the entire set of documents to verify the results if the lawyer had previously tested the accuracy of the tool on a smaller subset of documents by manually reviewing those documents, comparing them to the summaries produced by the tool, and finding the summaries accurate. Moreover, a lawyer’s use of a GAI tool designed specifically for the practice of law or to perform a discrete legal task, such as generating ideas, may require less independent verification or review, particularly where a lawyer’s prior experience with the GAI tool provides a reasonable basis for relying on its results.

There are several crucial points made in the above passages. First, the use of AI does not replace the need for human involvement. Of course, one could argue that humans are also prone to error, so fallibility alone should not disqualify using an AI on its own. However, the rules require that humans be competent and, where necessary, supervised; the same must hold for AI.

The opinion notes, importantly, that lawyers may rely on a particular generative AI tool whose accuracy has already been verified by a test run on a smaller problem with a smaller data set. This would seem to suggest that less caution may be required when using an already tested and verified AI than when using an AI for the first time. However, I think lawyers should approach this issue with caution until we have more information as to what would count as an acceptable test run and acceptable verification. As the opinion itself cautions, relying on a tool’s output without an appropriate degree of independent verification or review could violate the duty to provide competent representation required by Model Rule 1.1, and while GAI tools may be able to significantly assist lawyers in serving clients, they cannot replace the judgment and experience necessary for lawyers to competently advise clients about their legal matters or to craft the legal documents or arguments required to carry out representations.

On the requirement that lawyers maintain confidentiality of client materials and information pursuant to Rule 1.6 and related provisions, the opinion is also quite interesting. There are a few confidentiality issues that arise specifically with generative AI programs:

Self-learning GAI tools into which lawyers input information relating to the representation, by their very nature, raise the risk that information relating to one client’s representation may be disclosed improperly, even if the tool is used exclusively by lawyers at the same firm. This can occur when information relating to one client’s representation is input into the tool, then later revealed in response to prompts by lawyers working on other matters, who then share that output with other clients, file it with the court, or otherwise disclose it. In other words, the self-learning GAI tool may disclose information relating to the representation to persons outside the firm who are using the same GAI tool. Similarly, it may disclose information relating to the representation to persons in the firm who either (1) are prohibited from access to said information because of an ethical wall or (2) could inadvertently use the information from one client to help another client, not understanding that the lawyer is revealing client confidences. Accordingly, because many of today’s self-learning GAI tools are designed so that their output could lead directly or indirectly to the disclosure of information relating to the representation of a client, a client’s informed consent is required prior to inputting information relating to the representation into such a GAI tool.

The problem is that informed consent requires more than ordinary consent. The client must understand to what he is consenting, and it is not necessarily clear that most lawyers will in fact be able to explain how the AI program they’re using works:

When consent is required, it must be informed. For the consent to be informed, the client must have the lawyer’s best judgment about why the GAI tool is being used, the extent of and specific information about the risk, including particulars about the kinds of client information that will be disclosed, the ways in which others might use the information against the client’s interests, and a clear explanation of the GAI tool’s benefits to the representation. Part of informed consent requires the lawyer to explain the extent of the risk that later users or beneficiaries of the GAI tool will have access to information relating to the representation. To obtain informed consent when using a GAI tool, merely adding general, boiler-plate provisions to engagement letters purporting to authorize the lawyer to use GAI is not sufficient…[emphasis added].

As a baseline, all lawyers should read and understand the Terms of Use, privacy policy, and related contractual terms and policies of any GAI tool they use to learn who has access to the information that the lawyer inputs into the tool or consult with a colleague or external expert who has read and analyzed those terms and policies. Lawyers may need to consult with IT professionals or cyber security experts to fully understand these terms and policies as well as the manner in which GAI tools utilize information.

This section of the opinion seems to establish a rather heavy burden for lawyers to bear each time they wish to use an AI program in a client representation. The idea that lawyers using AI tools provided by third parties must either know enough to provide sufficient information to clients or consult experts who will undoubtedly charge for such consulting seems a bit unrealistic. The average lawyer is not going to be able to conduct the sort of investigation of an AI program and its provider that a court could interpret this section of the opinion to require. This seems especially true for solo practitioners and lawyers in small firms. Certainly, the opinion’s suggestions make sense in theory, but the cost of carrying them out in both time and money could in fact cause many lawyers not to use AI at all. That would itself seem to raise a Rule 1.1 problem, given that the comments to Rule 1.1 treat keeping abreast of the benefits and risks of relevant technology as part of competence. Perhaps, if lawyers use the same AI tool in the same way from the same provider, they will be able to obtain statements about the tool that can be used with multiple clients. Nevertheless, one can easily anticipate litigation at some point over whether a client’s consent was in fact informed, and that is a risk lawyers must weigh.

Related to the confidentiality issues are issues of communication. Rule 1.4 requires lawyers to provide their clients, in a timely manner, with the information necessary for the clients to make decisions about the representation. This raises the question of whether a client needs to be informed when a lawyer intends to use AI in a representation:

…lawyers must disclose their GAI practices if asked by a client how they conducted their work, or whether GAI technologies were employed in doing so, or if the client expressly requires disclosure under the terms of the engagement agreement or the client’s outside counsel guidelines. There are also situations where Model Rule 1.4 requires lawyers to discuss their use of GAI tools unprompted by the client.

Some of the situations in which the opinion suggests that a client will have to be informed about the use of AI may not be obvious to the average lawyer. For instance, the opinion points to lawyers’ use of AI in jury selection as one such case.

The opinion’s statements about the application of Rule 3.3 and Rule 8.4 address a problem that has already become apparent in court proceedings: the submission of AI-generated documents that are inaccurate. The Avianca case, in which lawyers filed a brief citing nonexistent cases generated by ChatGPT, has received a great deal of publicity in legal circles and is one such example. This problem of AI inaccuracy and falsification raises the related issue of adequate supervision required under Rules 5.1 and 5.3:

Managerial lawyers must establish clear policies regarding the law firm’s permissible use of GAI, and supervisory lawyers must make reasonable efforts to ensure that the firm’s lawyers and nonlawyers comply with their professional obligations when using GAI tools. Supervisory obligations also include ensuring that subordinate lawyers and nonlawyers are trained, including in the ethical and practical use of the GAI tools.

And:

Model Rule 5.3(b) imposes a duty on lawyers with direct supervisory authority over a nonlawyer to make “reasonable efforts to ensure that” the nonlawyer’s conduct conforms with the professional obligations of the lawyer. Earlier opinions recognize that when outsourcing legal and nonlegal services to third-party providers, lawyers must ensure, for example, that the third party will do the work capably and protect the confidentiality of information relating to the representation.

The opinion goes on to suggest looking to earlier authority (e.g., Formal Opinion 08-451) on such matters as cloud computing. Drawing on these earlier opinions and other authority, it sets out a number of bullet points for lawyers to consider. Every lawyer using a third-party AI provider should study these points carefully.

The final significant issue raised by Formal Opinion 512 is how to charge clients when using AI. The opinion applies Rule 1.5 and Formal Opinion 93-379:

GAI tools may provide lawyers with a faster and more efficient way to render legal services to their clients, but lawyers who bill clients an hourly rate for time spent on a matter must bill for their actual time. ABA Formal Ethics Opinion 93-379 explained, “the lawyer who has agreed to bill on the basis of hours expended does not fulfill her ethical duty if she bills the client for more time than she has actually expended on the client’s behalf.” …“If a lawyer has agreed to charge the client on [an hourly] basis and it turns out that the lawyer is particularly efficient in accomplishing a given result, it nonetheless will not be permissible to charge the client for more hours than were actually expended on the matter,” because “[t]he client should only be charged a reasonable fee for the legal services performed.”

The factors set forth in Rule 1.5(a) also apply when evaluating the reasonableness of charges for GAI tools when the lawyer and client agree on a flat or contingent fee. For example, if using a GAI tool enables a lawyer to complete tasks much more quickly than without the tool, it may be unreasonable under Rule 1.5 for the lawyer to charge the same flat fee when using the GAI tool as when not using it. “A fee charged for which little or no work was performed is an unreasonable fee.”

The principles set forth in ABA Formal Opinion 93-379 also apply when a lawyer charges GAI work as an expense. Rule 1.5(a) requires that disbursements, out-of-pocket expenses, or additional charges be reasonable…

In applying the principles set out in ABA Formal Ethics Opinion 93-379 to a lawyer’s use of a GAI tool, lawyers should analyze the characteristics and uses of each GAI tool, because the types, uses, and cost of GAI tools and services vary significantly. To the extent a particular tool or service functions similarly to equipping and maintaining a legal practice, a lawyer should consider its cost to be overhead and not charge the client for its cost absent a contrary disclosure to the client in advance… In contrast, when a lawyer uses a third-party provider’s GAI service to review thousands of voluminous contracts for a particular client and the provider charges the lawyer for using the tool on a per-use basis, it would ordinarily be reasonable for the lawyer to bill the client as an expense for the actual out-of-pocket expense incurred for using that tool…

… perhaps the most difficult issue is determining how to charge clients for providing in-house services that are not required to be included in general office overhead and for which the lawyer seeks reimbursement … Absent an advance agreement, the lawyer “is obliged to charge the client no more than the direct cost associated with the service (e.g., the cost of the photocopy machine) plus a reasonable allocation of overhead expenses directly associated with the provision of the service.”

Formal Opinion 512 is full of extremely useful information. The above summary and analysis gives only a partial view of the points made. Any lawyer using or contemplating the use of generative AI or other forms of AI must read the opinion and consider its suggestions. The use of generative AI in legal practice is evolving, and the risks associated with that use are only just becoming apparent.

READ THE FULL ISSUE OF LEMR, Vol. 5, No. 8


About Joseph, Hollander & Craft LLC

Joseph, Hollander & Craft is a mid-size law firm representing criminal defense, civil defense, personal injury, and family law clients throughout Kansas and Missouri. From our offices in Kansas City, Lawrence, Overland Park, Topeka and Wichita, our team of 25 attorneys covers a lot of ground, both geographically and professionally.

We defend against life-changing criminal prosecutions. We protect children and property in divorce cases. We pursue relief for clients who have suffered catastrophic injuries or the death of a loved one due to the negligence of others. We fight allegations of professional misconduct against medical and legal practitioners, accountants, real estate agents, and others.

When your business, freedom, property, or career is at stake, you want the attorney standing beside you to be skilled, prepared, and relentless — Ready for Anything, come what may. At JHC, we pride ourselves on offering outstanding legal counsel and representation with the personal attention and professionalism our clients deserve. Learn more about our attorneys and their areas of practice, and locate a JHC office near you.

Our Locations

Kansas City | 816-673-3900

926 Cherry St
Kansas City, MO 64106

Lawrence | 785-856-0143

5200 Bob Billings Pkwy, #201
Lawrence, KS 66049

Overland Park | 913-948-9490

10104 W 105th St
Overland Park, KS 66212

Topeka | 785-234-3272

1508 SW Topeka Blvd
Topeka, KS 66612

Wichita | 316-262-9393

500 N Market St
Wichita, KS 67214

Contact Joseph, Hollander & Craft LLC

Contact Joseph, Hollander & Craft to discuss how our team of attorneys can help you.
