May 14, 2024 - by Pamela Langham

Navigating Ethical Concerns for Lawyers Using AI

In the evolving legal technology landscape, propelled by the increasing use of artificial intelligence, complex ethical concerns arise. Lawyers in Maryland must remain cognizant of their obligations under the Maryland Attorneys’ Rules of Professional Conduct when using AI. The fundamental principles of competence, diligence, communication, reasonable fees, confidentiality, candor toward the tribunal, supervision, advertising, and misconduct guide the use of AI by lawyers. As AI continues to redefine the parameters of legal services, lawyers must remain steadfast in their ethical obligations, ensuring that their use of AI serves their clients with the highest degree of professionalism and responsibility.

This article highlights some of the ethical concerns a Maryland lawyer may want to consider before using AI for legal services or legal administration. Maryland has not yet adopted professional rules specifically addressing the use of AI, so this article provides an overview of the current rules that arguably encompass it.

Rule 301.1 Competency and Rule 301.3 Diligence

Maryland Attorneys’ Rules of Professional Conduct (MD R Attorneys) Rule 301.1 provides: “An attorney shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.” Comment 5 states that “competent handling of a particular matter includes inquiry into and analysis of the factual and legal elements of the problem, and use of methods and procedures meeting the standards of competent practitioners.” Comment 6 is also relevant, stating that an attorney shall “keep abreast of changes in the law and its practice, engage in continuing study and education and comply with all continuing legal education requirements to which the attorney is subject.”

Maryland, unlike the ABA and a handful of other states, has not implemented a specific technology education or use requirement for lawyers. However, if lawyers decide to use AI in their practice, the duty of competence requires them to understand the capabilities and limitations of the tools they use. This includes skill in using the technology and staying informed and up to date on AI developments. Lawyers using AI may also want to keep abreast of the benefits and risks associated with it.

A lawyer’s duties of competence and diligence also suggest that an attorney should always review work product produced by AI to ensure its accuracy and relevance to the particular legal issue at hand. Further, a lawyer should not substitute AI for their own professional judgment. Professor Fei-Fei Li, Co-Director of the Stanford Institute for Human-Centered Artificial Intelligence, IT Professor at the Graduate School of Business, and widely considered the Godmother of AI, once said, “artificial intelligence is not a substitute for human intelligence; it is a tool to amplify human creativity and ingenuity.” Likewise, a lawyer should not over-rely on artificial intelligence or let it stand in for a human lawyer’s professional judgment. Taken one step further, this includes ensuring the accuracy of a subordinate lawyer’s or non-lawyer’s work product produced partially or wholly by AI, as discussed in more detail below.

Rule 301.2 Scope of Representation and Allocation of Authority Between Client and Attorney and Rule 301.4 Communication

A lawyer’s ethical obligation under MD R Attorneys, Rule 301.2 to consult with the client about the means used to accomplish the client’s objectives, so that the client can effectively participate in the representation, may be invoked if a lawyer decides to use AI. Rule 301.4, MD R Attorneys, also requires reasonable communication with a client. It remains to be seen whether these rules collectively require an attorney to consult with a client before using AI in the representation. After all, client consent is not necessarily required to use email, traditional electronic legal research platforms, or other technology. However, attorneys may want to add language to their legal services agreements notifying clients that they may use AI to deliver legal services. Obtaining a client’s approval before using AI may be the best practice while AI is not yet mainstream in the legal realm. Lawyers may also want to explain the role of AI in providing legal services, including its potential risks and benefits, to ensure informed consent.

Rule 301.5 Fees

A basic tenet of Rule 301.5, MD R Attorneys, is that an attorney’s fee and expenses shall be reasonable. An attorney's use of AI may invoke this rule. Factors to determine the reasonableness of a fee include "the time and labor required, the novelty and difficulty of the questions involved, and the skill requisite to perform the legal service properly."  See MD R Attorneys, Rule 301.5(a)(1). If the use of AI substantially reduces the time and labor required to write an opinion letter, analyze a complaint, conduct legal research, write the first draft of a pleading or analyze contractual terms, then Rule 301.5 may trigger a reconsideration of the amount to charge a client for those legal services.

Rule 301.6 Confidentiality of Information

Subsection (a) of MD R Attorneys, Rule 301.6 states that an “attorney shall not reveal information relating to representation of a client unless the client gives informed consent, the disclosure is impliedly authorized in order to carry out the representation, or the disclosure is permitted by section (b) of this Rule.” Comment 2 reminds lawyers that a “fundamental principle in the client-attorney relationship is that, in the absence of the client's informed consent, the attorney must not reveal information relating to the representation.” To fulfill the duty of confidentiality, an attorney must act competently to preserve it. See Comment 19, MD R Attorneys, Rule 301.6. This includes protecting against “inadvertent or unauthorized disclosure by the attorney or other persons who are participating in the representation of the client or who are subject to the attorney’s supervision.” Id. Comment 20 to the rule requires lawyers to “take reasonable precautions to prevent [information relating to the representation of a client] . . . from coming into the hands of unintended recipients.”

Maintaining the confidentiality of a client’s information and protecting the attorney-client communication and work product privileges are among the most sacred duties a lawyer holds. Lawyers must ensure that AI tools do not compromise client data. When using AI, a client’s confidential information may be used by the underlying large language model (LLM) to learn and adapt over time, particularly when a lawyer uses a third-party AI service. The LLM could later disclose that confidential material when responding to another user’s inquiries. Lawyers using AI should therefore take reasonable efforts to prevent accidental or unauthorized disclosure of client information. To protect client data, lawyers may consider sanitizing any client-identifiable information before inputting it into a third-party LLM. Lawyers should also verify the security of third-party AI providers. Among other safeguards, lawyers should conduct reference checks on potential AI vendors, carefully examine their security protocols, consult with cybersecurity professionals, inquire about their hiring practices, and ensure that confidentiality agreements, including storage security provisions, are in place with the third-party vendor. By following these steps, lawyers can minimize risk and help keep their clients’ confidential information secure. Safeguards similar to those used to protect client data in email and cloud storage should be considered here. Of course, using a legal chatbot trained specifically on a law firm’s own data should mitigate some of this risk.
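To make the sanitization step concrete, here is a minimal, purely illustrative sketch of how obvious identifiers (emails, phone numbers, SSN-style numbers) might be stripped from text before it is sent to a third-party AI service. The patterns and placeholder labels are the author's-note-style assumptions of this sketch, not part of any rule or vendor tool; real redaction workflows require far more robust tooling (including name and entity detection) and human review.

```python
import re

# Illustrative only: catch a few obvious identifier formats.
# Client and matter NAMES are NOT handled here; detecting them
# reliably requires more sophisticated tools and human review.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def sanitize(text: str) -> str:
    """Replace each matched identifier with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

# Hypothetical example: scrub a note before building an AI prompt.
note = "Client reachable at jane.roe@example.com or 410-555-1234."
print(sanitize(note))
```

The point of the sketch is the ordering: sanitization happens locally, before any text leaves the firm's systems, mirroring the safeguards discussed above for email and cloud storage.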

Rule 303.3 Candor Toward the Tribunal

MD R Attorneys, Rule 303.3 prohibits a lawyer from “knowingly” making a false statement of fact or law to a tribunal or failing to correct a “false statement of material fact or law previously made to the tribunal. . .”  Id.  Subsection (4) of this rule requires an attorney who has “offered material evidence and comes to know of its falsity” to take reasonable remedial measures.

A lawyer should review all AI outputs and ensure their accuracy before incorporating them into a pleading filed with the court. The duties of competence and diligence apply here, but Rule 303.3 also reminds a lawyer that filing a pleading containing inaccurate generative AI output, including hallucinated cases, real cases with bad citations, or frivolous legal arguments, is prohibited. Examples of lawyers around the country citing hallucinated cases produced by AI in court pleadings are numerous. See NY Lawyers Sanctioned for Citing Fake Cases Derived from AI, MSBA Blog, June 28, 2023. Remaining mindful that some AI tools can produce hallucinated cases is good practice. Deepfakes may also trigger a lawyer’s obligation under this rule if the lawyer becomes aware of the fraudulent nature of a photo, audio, or video after offering it as material evidence. For more reading on deepfakes, see “Deepfakes: The Coming Evidentiary Crisis,” MSBA Blog, March 21, 2024.

Rule 305.1 Responsibilities of Partners, Managers, and Supervisory Attorneys

Partners and managing attorneys in law firms bear significant responsibility for ensuring the firm and its attorneys adhere to the ethical and professional rules. This extends to any attorney who exercises direct oversight over subordinate lawyers. An attorney who, aware of misconduct, either endorses it or fails to act to prevent its repercussions is held accountable. This accountability is not limited to those who actively participate in the misconduct but also includes those in positions of power who, despite being aware, neglect to take corrective measures. Thus, woven within the rule is a tapestry of accountability, emphasizing the importance of proactive and reactive measures to uphold the integrity of the legal profession.

It is reasonable to conclude that this rule encompasses the responsibility of supervising subordinate attorneys’ use of AI. More likely than not, this includes ensuring that work product produced by subordinate attorneys with the assistance of AI is accurate, e.g., contains no hallucinated cases. Concomitant with this responsibility is ensuring that a client’s confidential data is not disclosed to third parties through a subordinate’s AI prompt or input.

Supervision extends to AI tools, with lawyers responsible for the outputs generated and for ensuring that the tools do not replace their professional judgment. Law firms may want to consider implementing policies on the acceptable applications of generative AI within their firms.

Rule 305.3 Responsibilities Regarding Non-Attorney Assistants

Partners or attorneys with managerial authority in a law firm are tasked with implementing measures that ensure a non-attorney’s actions align with the attorney’s professional responsibilities. This oversight includes ensuring that a non-attorney’s conduct does not conflict with the attorney’s ethical obligations. Should a non-attorney’s actions contravene the Maryland Attorneys’ Rules of Professional Conduct, the supervising attorney is held accountable if they either endorse the behavior or fail to correct it when possible. Moreover, attorneys who engage formerly disbarred or suspended attorneys must enforce strict supervision over their law-related activities, which must occur in an environment overseen by a fully responsible attorney. The requirements of this rule extend to any assistants the attorney hires to help represent clients, including legal assistants, secretaries, investigators, receptionists, law student interns, and paraprofessionals.

The importance of a law firm policy on the acceptable applications of generative AI within the firm cannot be stressed enough when supervising non-attorneys. Any such policy should be designed to ensure that both lawyers and non-lawyer staff adhere to their professional duties while engaging with this technology. Proactive measures setting boundaries for the use of AI in the legal profession are crucial in maintaining the integrity of legal services, keeping a client’s data confidential and safeguarding the professional responsibilities of all legal practitioners.

Rule 307.1 Communications Concerning An Attorney’s Services and Rule 307.2 Advertising

Lawyers must follow certain rules for communications concerning their legal services, including advertisements. MD R Attorneys, Rule 307.1 prohibits attorneys from making “false or misleading communications about the attorney or attorney’s services.” Advertisements should be grounded in fact to avoid creating unjustified expectations about the results a lawyer may achieve. MD R Attorneys, Rule 307.2.

Lawyers who employ AI chatbots for advertisements, potential client engagement or marketing initiatives should take these rules into consideration. A lawyer should recognize their accountability for any inaccuracies disseminated by such AI systems. It may be advisable to ensure transparency by clearly disclosing the use of chatbots in interactions with prospective clients. Furthermore, in instances where individuals are already represented by legal counsel, it is essential that the chatbot's programming includes restrictions to prevent overstepping professional boundaries.

Rule 308.4 Misconduct

Rule 308.4 prohibits a lawyer from engaging “in conduct involving dishonesty, fraud, deceit or misrepresentation,” and from engaging “in conduct that is prejudicial to the administration of justice.” The requirements of this rule as applied to the use of AI are self-evident. Hallucinated cases generated by AI that are incorporated into a pleading and filed with the court may constitute a violation of this rule, among others.

Moreover, it has been documented that some LLMs may produce biased or discriminatory responses. Section (e) of this rule prohibits an attorney from “knowingly manifest[ing] by words or conduct when acting in a professional capacity bias or prejudice based upon race, sex, religion, national origin, disability, age, sexual orientation or socioeconomic status when such action is prejudicial to the administration of justice, provided, however, that legitimate advocacy is not a violation of this section.” It is important for legal professionals to recognize the potential prejudices encoded within AI algorithms and the risks they may pose.


AI has the potential to improve legal services on a scale never seen before. AI can and will benefit lawyers in much the same way other technologies already have. Email has freed lawyers from lengthy phone conferences. Traditional electronic research platforms have freed lawyers from spending hours in a law library combing through periodicals and reporters for the perfect case. As AI technology evolves, so too must a lawyer’s understanding of AI and its application in the legal profession, and, more importantly, of the ethical obligations its use entails. The objective is to integrate generative AI into legal practice, whether in the delivery of legal services or administrative functions, in a manner that upholds the profession’s standards and enhances the quality of legal services. Lawyers are already successfully using AI in a positive, ethical manner for the benefit of their clients. In closing, to modify one of Professor Fei-Fei Li’s predictions for the future: "AI won't replace [lawyers], but [lawyers] using AI will."