Feature Article: The Chatbots are Coming!

Published: 31 March 2023 | 9:21PM

Author: Dr. Michael H. Hoeflich

This article is featured in Volume 4, Number 3 of the Legal Ethics and Malpractice Reporter.


Actually, the chatbots are here and ubiquitous, including in law practice. But as these artificial intelligence (AI) programs become a part of everyday life, they raise a host of ethical questions that need to be addressed.

According to computer technology corporation Oracle:

At the most basic level, a chatbot is a computer program that simulates and processes human conversation (either written or spoken), allowing humans to interact with digital devices as if they were communicating with a real person. Chatbots can be as simple as rudimentary programs that answer a simple query with a single-line response, or as sophisticated as digital assistants that learn and evolve to deliver increasing levels of personalization as they gather and process information.

. . . 

Driven by AI, automated rules, natural-language processing (NLP), and machine learning (ML), chatbots process data to deliver responses to requests of all kinds.

There are two main types of chatbots.

  • Task-oriented (declarative) chatbots are single-purpose programs that focus on performing one function. Using rules, NLP, and very little ML, they generate automated but conversational responses to user inquiries. Interactions with these chatbots are highly specific and structured and are most applicable to support and service functions—think robust, interactive FAQs. Task-oriented chatbots can handle common questions, such as queries about hours of business or simple transactions that don’t involve a variety of variables. Though they do use NLP so end users can experience them in a conversational way, their capabilities are fairly basic. These are currently the most commonly used chatbots.
  • Data-driven and predictive (conversational) chatbots are often referred to as virtual assistants or digital assistants, and they are much more sophisticated, interactive, and personalized than task-oriented chatbots. These chatbots are contextually aware and leverage natural-language understanding (NLU), NLP, and ML to learn as they go. They apply predictive intelligence and analytics to enable personalization based on user profiles and past user behavior. Digital assistants can learn a user’s preferences over time, provide recommendations, and even anticipate needs. In addition to monitoring data and intent, they can initiate conversations. Apple’s Siri and Amazon’s Alexa are examples of consumer-oriented, data-driven, predictive chatbots.

Advanced digital assistants are also able to connect several single-purpose chatbots under one umbrella, pull disparate information from each of them, and then combine this information to perform a task while still maintaining context—so the chatbot doesn’t become “confused.”
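
To make the task-oriented category concrete, the sketch below implements the simplest possible rules-only responder of the kind Oracle describes: a handful of keyword patterns mapped to canned replies, with no machine learning at all. The intents and wording are hypothetical placeholders for a law-office FAQ bot.

```python
import re

# Hypothetical FAQ rules for a law-office front desk: each entry pairs a
# keyword pattern with a canned, conversational reply. Real task-oriented
# chatbots layer NLP on top of rules like these; this sketch is regex only.
RULES = [
    (re.compile(r"\b(hours|open|close)\b", re.I),
     "Our office is open 9 a.m. to 5 p.m., Monday through Friday."),
    (re.compile(r"\b(address|location|directions)\b", re.I),
     "We are located at 123 Main Street."),  # placeholder address
    (re.compile(r"\b(appointment|consultation)\b", re.I),
     "You can request a consultation by calling our front desk."),
]

FALLBACK = "I'm a simple automated assistant; let me connect you with a person."

def reply(user_message: str) -> str:
    """Return the first canned response whose pattern matches, else a fallback."""
    for pattern, answer in RULES:
        if pattern.search(user_message):
            return answer
    return FALLBACK

print(reply("What are your hours?"))       # matches the hours rule
print(reply("Can you review my lease?"))   # no rule matches -> fallback
```

Anything outside the rule set falls through to the fallback, which is why Oracle characterizes these bots as "robust, interactive FAQs" rather than true assistants.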

Most recently, ChatGPT and GPT-4 from the tech company OpenAI have come to the forefront. PC Magazine's online encyclopedia offers the following description:

(CHAT Generative Pretrained Transformer) An AI-powered chatbot from the OpenAI research company that simulates a human speaking English and other languages. ChatGPT will generate a response when asked open-ended questions about any topic. It is also used for writing program code, composing music, answering test questions, and generating short essays and articles. See OpenAI.

ChatGPT was introduced in late 2022, and within a couple of days more than a million people were using it. A few weeks later, the website was so popular that users had to keep reloading the page to gain access.

Derived from OpenAI's GPT-3 natural language system, ChatGPT is the successor to InstructGPT. It has also been used by cybercriminals to build hacking tools and strains of malware.

Known as a “large language model” (LLM), ChatGPT gained its knowledge base from hundreds of millions of websites, blogs and social media posts. However, what sets ChatGPT apart is that throughout its training process, developers were very involved in adjusting results to make them more accurate. Although said to be a milestone in AI because it generates amazing results, ChatGPT has also been accused of delivering “coherent nonsense.”

There is an incredible number of possible uses for chatbots in law practice, especially for the newest versions like ChatGPT. A chatbot can be used to interface with the public online or by telephone. A chatbot can do sophisticated legal research. A chatbot can even produce legal documents. Because these new programs draw on vast swaths of the information available on the internet, their capabilities are astonishing. However, they can also be quite problematic for lawyers from an ethical standpoint.
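
As one concrete illustration of the drafting use just mentioned, the sketch below sends a prompt to ChatGPT through OpenAI's Python package as it existed when this article was written. The model choice and prompt are illustrative assumptions, not recommendations.

```python
# A minimal sketch using the `openai` Python package as it existed when this
# article was written (before the 1.0 rewrite). The model name and prompt are
# illustrative assumptions; never hard-code API keys in source.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model; GPT-4 API access was gated then
    messages=[
        {"role": "system",
         "content": "You draft plain-language legal form letters."},
        {"role": "user",
         "content": "Draft a generic demand-letter template using bracketed "
                    "placeholders instead of any party names or amounts."},
    ],
)

print(response.choices[0].message.content)
```

Note that the prompt asks for bracketed placeholders rather than real party names; supplying actual client details would trigger the confidentiality analysis taken up below.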

Writing in the online publication Bigger Law Firm, Kerry Spencer has identified a number of ethical issues regarding chatbots. Among the most significant are:

  • The duties of diligence and competence 
  • Duties of supervision – Attorneys working with others on a case have a duty to ensure that work is done competently. This means not solely relying on AI for an accurate outcome. It also means supervising the use of AI.
  • Duty of knowing where to draw the line 
  • Duty of knowing how far to use such technology 
  • Duty to ensure client privilege and confidentiality 
  • Unauthorized practice of law 

While there has been a good deal of commentary about AI in law practice over the past few years, the chatbots now available are still sufficiently new that there is very little authority on the ethical questions they raise. Nevertheless, the Rules of Professional Conduct do give us some direction.

When using chatbots as a public interface, lawyers should be mindful of KRPC 1.4, which states:

(a) A lawyer shall keep a client reasonably informed about the status of a matter and promptly comply with reasonable requests for information. 

(b) A lawyer shall explain a matter to the extent reasonably necessary to permit the client to make informed decisions regarding the representation.

Missouri's MRPC 4-1.4 states:

(a) A lawyer shall:

(1) keep the client reasonably informed about the status of the matter;

(2) promptly comply with reasonable requests for information; and

(3) consult with the client about any relevant limitation on the lawyer’s conduct when the lawyer knows the client expects assistance not permitted by the Rules of Professional Conduct or other law.

(b) A lawyer shall explain a matter to the extent reasonably necessary to permit the client to make informed decisions regarding the representation.

It seems fairly obvious that a lawyer who is using a chatbot, particularly one that is not being supervised by a human being, must let clients and the public know that they may not be talking to an actual lawyer or, indeed, a human being. A chatbot that initiates unsolicited contact also presents the possibility that a potential client may think that they are talking to a lawyer and that a lawyer-client relationship has been formed. Since chatbots are far from perfect and may fabricate facts to fit the narrative they are developing, this could lead to a Rule 1.1 competency problem; a Rule 5.3 supervisory problem if the chatbot is deemed to be a non-lawyer assistant, as some ethics opinions have suggested about other forms of AI; and, in some cases, malpractice liability for the lawyer employing the chatbot (lawpracticetoday.org).
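
One practical, if partial, safeguard suggested by this analysis is to make the disclosure unmissable by prepending it to every public-facing session. The sketch below wraps any reply function with such a notice; the wording is illustrative only and has not been vetted against any jurisdiction's rules.

```python
# A sketch of wrapping any chatbot so every public session opens with a
# non-lawyer disclosure. The wording is illustrative only and has not been
# vetted as sufficient under any jurisdiction's professional-conduct rules.
DISCLOSURE = (
    "NOTICE: You are chatting with an automated assistant, not a lawyer. "
    "Nothing in this chat creates an attorney-client relationship."
)

def with_disclosure(bot_reply):
    """Return a session handler that prepends the disclosure to the first turn."""
    first_turn = True
    def handle(user_message: str) -> str:
        nonlocal first_turn
        answer = bot_reply(user_message)
        if first_turn:
            first_turn = False
            return DISCLOSURE + "\n\n" + answer
        return answer
    return handle

# Usage with any reply function, e.g. the rules-based `reply` sketched earlier:
# session = with_disclosure(reply)
# print(session("What are your hours?"))  # disclosure precedes the first answer
```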

If a lawyer uses AI like ChatGPT to actually compose documents, then it will be necessary to supply the chatbot with confidential client information. This, of course, raises quite serious confidentiality issues. Comment 26 to KRPC 1.6 states:

Paragraph (c) requires a lawyer to act competently to safeguard information relating to the representation of a client against unauthorized access by third parties and against inadvertent or unauthorized disclosure by the lawyer or other persons who are participating in the representation of the client or who are subject to the lawyer’s supervision. See Rules 1.1, 5.1, and 5.3. The unauthorized access to, or the inadvertent or unauthorized disclosure of, information relating to the representation of a client does not constitute a violation of paragraph (c) if the lawyer has made reasonable efforts to prevent the access or disclosure. Factors to be considered in determining the reasonableness of the lawyer’s efforts include, but are not limited to, the sensitivity of the information, the likelihood of disclosure if additional safeguards are not employed, the cost of employing additional safeguards, the difficulty of implementing the safeguards, and the extent to which the safeguards adversely affect the lawyer’s ability to represent clients (e.g., by making a device or important piece of software excessively difficult to use). A client may require the lawyer to implement special security measures not required by this Rule or may give informed consent to forgo security measures that would otherwise be required by this Rule. Whether a lawyer may be required to take additional steps to safeguard a client’s information in order to comply with other law, such as state and federal laws that govern data privacy or that impose notification requirements upon the loss of, or unauthorized access to, electronic information, is beyond the scope of these Rules. For a lawyer’s duties when sharing information with nonlawyers outside the lawyer’s own firm, see Rule 5.3, Comments [3]-[4].

The question is what constitutes "reasonable efforts" when using complex AI like ChatGPT, given that much of the technology is proprietary and companies may be unwilling to provide a lawyer with enough information to know who has access to confidential client information.

Rafael Baca, chair of the ABA Artificial Intelligence Committee, gives this advice:

Thankfully, similar issues of patient data confidentiality have already been accommodated by many software tools in the health care field, as mandated by the Affordable Care Act and earlier federal laws such as HIPAA. Because of a federally mandated head start toward full automation of electronic health records, legal practitioners can often incorporate existing commercial patient privacy software and health care IT network infrastructures as an ethical and economical basis for adding leading-edge, robust privacy, and security features to their law practice.

Consider the example of a legal practitioner applying anonymization algorithms and techniques to client data as it is received and stored. Categories of information from the collected raw data, such as names, addresses, expenditures, and other private information can be redacted using digital anonymization strategies to ensure client confidentiality under Rule 1.6. To limit costs, it is critical that a legal practitioner closely communicates to technical professionals what categories of data will require anonymization. In practice, the anonymized data is encrypted and can be used to build AI models while keeping critical anonymization in place. Such anonymization techniques are already used heavily in the medical field, so a pool of experienced healthcare software professionals is available for law practices to draw from. Similarly, law practices can look to experienced financial industry software professionals to apply the latest anonymization and privacy and security techniques used in banking and accounting.

Despite all the technical jargon and concepts, a legal practitioner should understand that data is ultimately collected and used by humans with software tools. Lawyers should rest assured that, fundamentally, humans design and curate the datasets for legal client records as well as to drive AI legal software algorithms. As such, lawyers must consider the underlying human bias inherent to any dataset used by AI algorithms. Lawyers must remain informed and in constant communication with their software professionals to ensure that the optimal results from AI algorithms arise from the highest-quality data.

Rafael Baca, "Model Ethics Rules as Applied to Artificial Intelligence," ABA Law Practice Today (August 14, 2020).

What is quite clear, both from Comment 26 to Rule 1.6 and from commentators like Baca, is that the use of chatbots will raise significant confidentiality issues and problems requiring significant amounts of technical help. While that help may be affordable for larger law firms, how will a solo practitioner or small law firm deal with the extra costs involved?
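
To make Baca's anonymization suggestion concrete, the sketch below replaces a few categories of identifying data with neutral tokens before any text leaves the firm. The patterns and placeholder tokens are illustrative assumptions; production-grade redaction requires far more robust tooling.

```python
import re

# Illustrative redaction patterns along the lines Baca describes: names,
# addresses, and dollar amounts are replaced with neutral tokens before the
# text is sent anywhere. Real-world redaction needs far more robust tooling.
PATTERNS = {
    "[NAME]":    re.compile(r"\b(?:Mr\.|Ms\.|Mrs\.|Dr\.)\s+[A-Z][a-z]+\b"),
    "[ADDRESS]": re.compile(r"\b\d{1,5}\s+[A-Z][a-z]+\s+(?:St|Ave|Blvd|Rd)\b"),
    "[AMOUNT]":  re.compile(r"\$\d[\d,]*(?:\.\d{2})?"),
}

def anonymize(text: str) -> str:
    """Replace each matched category with its placeholder token."""
    for token, pattern in PATTERNS.items():
        text = pattern.sub(token, text)
    return text

raw = "Ms. Smith of 420 Oak St paid $12,500.00 toward the settlement."
print(anonymize(raw))
# -> "[NAME] of [ADDRESS] paid [AMOUNT] toward the settlement."
```

A real redaction pipeline is, of course, exactly the kind of technical help whose cost the preceding paragraph worries about: a few regular expressions will miss misspelled names, unusual address formats, and context that identifies a client indirectly.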

Many lawyers may be inclined simply not to use chatbots in their practices and to do things the "old-fashioned" way instead. However, if chatbots like ChatGPT can, in fact, perform tasks more quickly and less expensively than human lawyers, do lawyers have an obligation under Rule 1.1 to use them? If so, how will they charge for those tasks under Rule 1.5? Indeed, if lawyers do use AI programs to compose documents, they may well lose significant amounts of revenue unless they charge more for AI-created documents than those documents actually cost them. This would seem to create an ethical problem in itself if one follows the guidance of ABA Formal Opinion 93-379, which states:

Perhaps the most difficult issue is the handling of charges to clients for the provision of in-house services. In this connection, the Committee has in view charges for photocopying, computer research, on-site meals, deliveries, and other similar items. Like professional fees, it seems clear that lawyers may pass on reasonable charges for these services. Thus, in the view of the Committee, the lawyer and the client may agree in advance that, for example, photocopying will be charged at $.15 per page, or messenger services will be provided at $5.00 per mile. However, the question arises what may be charged to the client, in the absence of a specific agreement to the contrary, when the client has simply been told that costs for these items will be charged to the client. We conclude that under those circumstances the lawyer is obliged to charge the client no more than the direct cost associated with the service (i.e., the actual cost of making a copy on the photocopy machine) plus a reasonable allocation of overhead expenses directly associated with the provision of the service (e.g., the salary of a photocopy machine operator).

This ABA opinion has been applied, for instance, to computerized legal research costs and held to require that a lawyer charge no more than the cost of the research to the lawyer plus an allocated share of general overhead. Thus, lawyers may well be forced to charge out chatbots at their actual costs. This would eliminate the profit component present in billing for a human lawyer's time and could thus lower law firm profits.
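
Read literally, the opinion's formula for a chatbot charge (absent a contrary agreement) reduces to simple arithmetic: direct cost plus a reasonable allocation of directly associated overhead, with no profit markup. A hypothetical illustration, with all figures invented:

```python
# A hypothetical illustration of ABA Formal Opinion 93-379's formula applied
# to a chatbot-drafted document: direct cost plus a reasonable allocation of
# directly associated overhead, with no profit markup. All figures invented.
api_cost_per_document = 0.45        # hypothetical per-document API charge
monthly_overhead = 120.00           # hypothetical subscription / IT support
documents_per_month = 400

overhead_per_document = monthly_overhead / documents_per_month
chargeable = api_cost_per_document + overhead_per_document

print(f"Maximum chargeable per document: ${chargeable:.2f}")  # -> $0.75
```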

The issues discussed in this brief article are only the beginning. As lawyers and judges come to understand more about chatbots, it is inevitable that more ethical issues will be identified. Chatbots are here to stay, and lawyers must learn how they function, the dangers they present, and the potentially revolutionary changes that they will bring to the world of the law.

Read the full issue of LEMR, Vol. 4, No. 3.

About Joseph, Hollander & Craft LLC

Joseph, Hollander & Craft is a premier law firm representing criminal, civil and family law clients throughout Kansas and Missouri. When your business, your freedom, your property, or your career is at stake, you want the attorney standing beside you to be skilled, prepared, and relentless. From our offices in Kansas City, Lawrence, Overland Park, Topeka, and Wichita, our team of 20+ attorneys has you covered. We defend against life-changing criminal prosecutions. We protect children and property in divorce cases. We pursue relief for victims of trucking collisions and those who have suffered traumatic brain injuries due to the negligence of others. We fight allegations of professional misconduct against doctors, nurses, judges, attorneys, accountants, real estate agents and others. And we represent healthcare professionals and hospitals in civil litigation.

