February 5, 2024 - Pamela Langham

Artificial Intelligence: A Legal Minefield for Lawyers

Artificial intelligence (AI) is transforming the legal profession in many ways, from automating tasks to enhancing research and analysis. However, AI also poses significant challenges and risks for lawyers, especially in terms of ethical, regulatory, practice, and liability issues. In this article, the MSBA will explore some of the main legal pitfalls of AI for lawyers, such as ensuring compliance with professional standards, protecting client confidentiality and data security, verifying accuracy, and guarding against manipulated photo and video evidence. It will also offer practical tips and best practices for navigating the complex and evolving landscape of AI in the law.

Ethical Standards

Lawyers who use artificial intelligence systems in their practice, such as predictive coding (technology-assisted review, or “TAR”), e-discovery software, legal research tools, or automated document generation, must adhere to the ethical rules outlined by their state bar association and, in jurisdictions that follow them, the American Bar Association's Model Rules of Professional Conduct. This includes the rules governing competence, diligence, communications, confidentiality, supervision, honesty, and loyalty. More specifically, it means competently representing clients by properly training any AI systems (such as predictive coding or TAR), thoroughly reviewing automated research outcomes, reviewing automatically generated documents, and closely supervising all staff who use the technology, to prevent errors. To uphold the duty of communication, attorneys should clearly explain the capabilities and limitations of AI tools to clients and support staff. To remain ethical users of AI, lawyers should regularly consult bar opinions, take CLE courses on AI ethics, and proactively seek guidance when uncertain how AI intersects with the professional and ethical rules. With proper oversight and education, attorneys can tap the benefits of AI to enhance their legal services while avoiding the potential harms.

Security and Privacy Risks

AI systems may be vulnerable to cyberattacks or misuse by malicious actors, and may pose threats to the confidentiality, security, and privacy of individuals or organizations. These issues raise questions about how to ensure that AI systems are secure and trustworthy, and how to protect the data and information they collect, process, or generate. Attorneys utilizing AI must take steps to protect the confidentiality and privacy of client information. Attorney-client privilege and work-product protections still apply to data processed by AI, so lawyers must maintain confidentiality and prevent improper disclosure. When feeding data into machine learning systems, lawyers should consider providing only anonymized or redacted documents, as sketched below. If a client's identity, contact information, financial records, medical history, or other confidential data is required for the AI analysis, express written consent should be obtained in advance. Lawyers should also consider implementing cybersecurity protections like encryption, access controls, and data masking to secure AI systems. AI vendors should be vetted for rigorous privacy standards and robust security protocols. When in doubt, lawyers should consult an IT specialist about precautions for using client data with AI technology. With proper consent and safeguards, attorneys can unlock AI's potential while still respecting client privacy.
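For illustration only, the following Python sketch shows one simple way text might be scrubbed of obvious identifiers before it leaves the firm. The patterns and placeholder tokens are assumptions made for the example, not a complete PII filter; notably, simple patterns like these do not catch names or other free-text identifiers, which is why dedicated redaction tools and human review remain essential.

```python
# A minimal sketch of masking obvious identifiers before sending text to an
# external AI service. The patterns below are illustrative assumptions, not a
# comprehensive PII filter.
import re

PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    for token, pattern in PATTERNS.items():
        text = pattern.sub(token, text)
    return text

sample = "Client Jane Roe, jane.roe@example.com, SSN 123-45-6789, 410-555-0123."
print(redact(sample))
# Client Jane Roe, [EMAIL], SSN [SSN], [PHONE].
# Note: the client's name is NOT caught by these patterns.
```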

Accuracy

AI systems may produce erroneous or misleading legal research results, as previously reported by the MSBA. (See "NY Lawyers and Law Firm Sanctioned for Citing Fake Cases Derived from AI" and "Latest AI Legal Implications: Using ChatGPT for Legal Research, Not So Fast!"). But the inaccuracies do not stop there. AI applied in the legal context can produce flawed contract review, inaccurate e-discovery coding, misapplication of the law, or incorrect legal conclusions in a legal memorandum. These errors can have dire consequences for the lawyer who uses the AI, and for the client, if relied upon blindly. Lawyers have an ethical duty to verify and validate the AI tools they use, and to exercise independent judgment and due diligence if they decide to rely on the AI output. This may include, among other things, testing AI tools against known results to identify discrepancies before use; one simple form of such a check is sketched below. Even after testing, lawyers should continuously monitor the AI for degraded performance. In other words, AI should supplement, not replace, a lawyer's own research, analysis, and judgment. Lawyers retain ultimate responsibility for the AI output. With proper human supervision and diligence, the legal profession can manage the risks of AI errors.
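As one concrete illustration of such verification, the sketch below flags any citation in an AI tool's output that cannot be matched against a trusted source. The `trusted_citations` set is a stand-in assumption for a real lookup against a verified research database; the sample fabricated citation is drawn from the reported New York sanctions matter. Passing a check like this is still no substitute for reading the cited authority itself.

```python
# A minimal sketch of spot-checking AI research output: flag citations that
# cannot be verified against a trusted source. `trusted_citations` stands in
# for a real lookup against a verified database (an assumption for this
# example).

def unverified_citations(ai_citations: list[str],
                         trusted_citations: set[str]) -> list[str]:
    """Return every citation in the AI output absent from the trusted set."""
    return [c for c in ai_citations if c not in trusted_citations]

trusted_citations = {
    "Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023)",
}
ai_output = [
    "Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023)",
    # A citation ChatGPT fabricated in the Mata matter:
    "Varghese v. China Southern Airlines, 925 F.3d 1339 (11th Cir. 2019)",
]
for citation in unverified_citations(ai_output, trusted_citations):
    print("VERIFY MANUALLY:", citation)
```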

Evidence

Photos and videos can be crucial pieces of evidence in a legal case, as they can provide visual information that may not be available from other sources. They can capture the scene of a crime, an accident, a violation, or a dispute, and show details such as the location, the time, the people involved, the damages, the injuries, or the actions taken. They can also corroborate or contradict the testimony of witnesses, experts, or parties to the case, and help establish the credibility, reliability, or accuracy of their statements. Good litigators know that photos and videos can elicit emotional responses from the judge, jury, or public, and influence their perception of the case. Photos and videos can therefore have a significant impact on the outcome of a legal case, and should be carefully collected, preserved, analyzed, and presented by the lawyers. Of course, photos and videos have long been susceptible to manipulation, e.g., with Photoshop.

Enter AI, where deepfakes can now be used to threaten individuals and organizations by altering real photos and videos or creating fictitious ones. It is important for lawyers to understand what deepfakes are and how they are created. Broadly defined, deepfakes are “technically snippets of video in which the face or body has been digitally manipulated so that they appear to be someone else.” Deepfakes are typically created with facial recognition algorithms and a variational autoencoder (VAE). A VAE is trained to encode a photo into a low-dimensional representation and then decode that representation back into a photo; by pairing the encoder with a decoder trained on a different person's face, the output depicts someone else. For example, applying the encoder to the face of a young boy and a decoder trained on the face of Tom Cruise would result in the face of Tom Cruise on the body of a young boy. This has already been done. A deepfake of Scarlett Johansson's image and voice recently appeared on X (formerly Twitter) in an ad. The real Scarlett Johansson took immediate legal action, as did fellow thespian Tom Hanks when a deepfake image of his likeness and voice promoted a dental plan.
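To make the mechanics less abstract, here is a heavily simplified PyTorch sketch of the encode-then-decode idea, assuming toy layer sizes and 64x64 images; real deepfake pipelines add face detection, alignment, and far larger models. One shared encoder compresses a face into a low-dimensional code, and each identity gets its own decoder, so decoding person A's frame with person B's decoder performs the swap.

```python
# Simplified sketch of the shared-encoder / per-identity-decoder design
# behind face-swap deepfakes. Layer sizes and image dimensions are toy
# assumptions for illustration.
import torch
import torch.nn as nn

LATENT = 128  # size of the low-dimensional representation (assumption)

class Encoder(nn.Module):
    """Encodes a 64x64 RGB face into a sampled latent code (VAE-style)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 512), nn.ReLU(),
        )
        self.mu = nn.Linear(512, LATENT)
        self.logvar = nn.Linear(512, LATENT)

    def forward(self, x):
        h = self.body(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample the latent code.
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

class Decoder(nn.Module):
    """Reconstructs a face image from the latent code.
    One decoder is trained per identity (person A, person B)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT, 512), nn.ReLU(),
            nn.Linear(512, 3 * 64 * 64), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()
decoder_a = Decoder()  # would be trained to reconstruct person A's face
decoder_b = Decoder()  # would be trained to reconstruct person B's face

# The swap: encode a frame of person A, decode with person B's decoder,
# yielding person B's face with person A's pose and expression.
frame_of_a = torch.rand(1, 3, 64, 64)  # stand-in for a real video frame
swapped = decoder_b(encoder(frame_of_a))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```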

Artificial intelligence can fabricate an event that never occurred or show a person saying something they never uttered. YouTube recently acknowledged the problem when it imposed new rules for uploaded videos: creators will be required to disclose whether AI was used to create a video, and if so, YouTube will label the video to alert viewers. If a deepfake video appears to show a witness lying or contradicting themselves in a court case, it can have serious implications for opposing counsel and for lawyers who inadvertently use the video as evidence. Lawyers should have a heightened awareness that AI manipulation is possible and increasingly difficult to detect.

Summary

As AI becomes more sophisticated and ubiquitous, it also poses significant challenges and risks for lawyers. The responsible application of AI in the legal profession, and awareness of its risks, remain significant hurdles. By understanding AI pitfalls like the ones discussed here, lawyers can help promote an ethical and transparent application of AI that enhances legal services without sacrificing essential human qualities and ethical standards.