


Beyond the Hype and Fear:

Implementing Artificial Intelligence in a Law Practice

With the recent explosion in the use of artificial intelligence following the release and development of large language models such as ChatGPT, Claude, Gemini, and Copilot, society at large, including lawyers, has entered a new technological age.1 Artificial Intelligence (AI) is not mere hype, but rather marks a new era for the legal profession, defined by dramatically improved productivity in drafting pleadings, managing document reviews, strategizing cases, and optimizing workflows. The changes to the legal profession brought by AI are arguably poised to surpass even those brought by the internet.

These changes do not come without concern or risk. Due to the relative newness of AI and its rapid pace of development, it is challenging for attorneys not only to understand AI but also to use it confidently without misstep. Such missteps have been showcased in recent media reports about attorneys who submitted AI-generated briefs filled with inaccurate facts or completely fabricated citations, leading to public embarrassment, judicial rebukes, and damaged reputations. Indeed, just months ago, in October 2025, an attorney was referred to the Maryland Attorney Grievance Commission after submitting a brief to the Appellate Court of Maryland containing eleven hallucinated case citations, completely made up by ChatGPT.2

Nevertheless, AI’s value to lawyers and their clients, to expanded access to justice, and to the legal profession can no longer be ignored. The question facing attorneys is no longer whether they should use AI, but how to do so safely, responsibly, and ethically.

The ethical and practical framework for answering that question is more straightforward than many attorneys might assume. While ethical concerns surrounding the use of AI extend beyond simply ensuring correct case citations, they are not novel. Chief among these ethics concerns are the professional obligations of competency, judgment, and protection of client confidentiality, obligations that have always governed attorney conduct regardless of the technology used. Because the Maryland Rules of Professional Conduct (MRPC) regulate attorney conduct rather than specific tools, an attorney may use AI so long as its use complies with the rules governing competence, independent judgment, confidentiality, and supervision.

This article outlines the ethical concerns presented by AI use, explains how modern AI platforms have developed to address them, and provides several steps for law firms to implement AI safely, responsibly, and ethically.

Competence, Hallucinations, and the Dangers of the AI Siren

In its Formal Opinion 512 on AI,3 the ABA began by noting that attorneys have a responsibility to provide competent representation to their clients under Model Rule 1.1 (MRPC 19-301.1). Competence requires attorneys to apply “the legal knowledge, skill, thoroughness, and preparation reasonably necessary for the representation,” as well as to understand “the benefits and risks associated with the technologies used to deliver legal services to clients.” The ABA stated that attorneys can attain this competence either by acquiring a reasonable understanding of the technology themselves or by drawing on the expertise of others who can provide guidance. The ABA added that technological competence is not a “static undertaking,” but requires attorneys to stay educated about changes in the law and its practice, including remaining knowledgeable about the continuing “benefits and risks associated with relevant technology.” Thus, to use AI competently, attorneys must be familiar with the technology and its limitations.

To that end, generative AI systems can assist with drafting, summarization, organization, and research by predicting language based on patterns in large datasets. These systems generate responses through prediction rather than comprehension or verification of underlying facts or legal authority.4 Because AI models predict rather than think, they sometimes produce inaccurate information or entirely fabricated content, a phenomenon commonly referred to as “hallucinations.” This is because predictive AI does not independently assess accuracy, truth, or legal relevance, and may further miss nuances, context, or arguments that can be developed by trained and experienced attorneys. Any review of nationwide news headlines reveals that many attorneys have already been caught using hallucinated cases in their briefs and motions.

1 For a definition of large language models and background on how they work, see: Cole Stryker, “What are Large Language Models (LLMs)?”, IBM Think, last accessed December 2, 2025, https://www.ibm.com/think/topics/large-language-models. For the history of AI, see: B.J. Copeland, “History of Artificial Intelligence (AI)”, Britannica, November 7, 2025, last accessed December 2, 2025, https://www.britannica.com/science/history-of-artificial-intelligence.

2 See Chukwuemeka Mezu v. Kristen Mezu, 267 Md.App. 354, 346 A.3d 181 (2025). See also Pamela Langham, “Maryland Lawyer Referred to Attorney Grievance Commission for Citing Fake AI Cases,” MSBA, November 3, 2025, https://www.msba.org/site/site/content/News-and-Publications/News/General-News/Maryland_Lawyer_Referred_to_Attorney_Grievance_Commission_for_Citing_Fake%20AI_Cases.aspx.

3 ABA Comm. On Pro. Ethics & Grievances, Formal Op. 512 (2024).

Surprisingly, many of these case filings containing hallucinated case law come not from young, untrained attorneys, but rather from high-level, experienced lawyers who know how to write, cite, and argue the law well. This is because AI is an excellent writer—expertly mimicking legal writing and citations while simultaneously weaving them almost flawlessly into well-crafted arguments. This writing style is not surprising, as AI models, even non-legal ones, have been trained on countless books, treatises, case law, and other legal writing. AI models then present such output in confident, positive tones, which can mislead the user into believing that the information provided is entirely accurate. The models are also charismatic and friendly, which can quickly establish a trusting relationship with the user that makes the user unlikely to challenge the model’s outputs.5

The coupling of AI’s confident, friendly tone with its well-written, well-reasoned output creates what is aptly described as “AI’s siren song.” AI’s well-written output, confidently presented, can readily mislead users into trusting its accuracy and lull them into complacency, just as the sirens of legend tricked sailors into jumping from their ships onto lethal, rocky shores.

AI’s siren song is particularly dangerous for attorneys, largely because it reinforces the ideas that they input into the system without correcting false information or challenging any fallacious reasoning. This may mislead attorneys into overestimating the strength of their legal arguments and theories, or into relying on incorrect facts or law. The AI siren is also dangerous because the tone and quality of its writing make it appear that nothing is wrong, even though there may be hallucinated citations, arguments, and quotes cleverly spliced and written within.

Fully understanding this limitation of AI, and its attendant siren song, is critical to an attorney’s competent and ethical use of AI. It is far too easy for attorneys in our fast-paced world, with all its competing demands, to be lulled into complacency regarding the dangers posed by AI’s siren song and simply overlook hallucinations. Attorneys must recognize the potential for hallucinations and carefully verify any AI-generated work product, treating it as preliminary rather than authoritative. In practice, AI output should be treated much like the work of a young, inexperienced associate: useful but requiring independent legal review, refinement, and the exercise of professional judgment before it is relied upon. This review must include not only verification of legal citations and assumptions but also the application of independent judgment to ensure that the work product is appropriate, complete, and responsive to the matter at hand. Such vigilance is critical for attorneys to avoid the perils of AI.

4 Retrieval-Augmented Generation (RAG) is now being used to have the AI retrieve sources and check its response before answering. RAG has not fully eliminated the inaccuracies that can occur with AI, but it provides both improvements and links for further verification of responses.

5 Such misplaced trust can even lead to a condition known as “AI Psychosis.” Many people who have mental health issues, such as delusional or disorganized thinking, have experienced worsened or heightened symptoms after using AI—largely because the models reinforce, rather than challenge, the user’s thinking. For more information on this “AI Psychosis,” see: Marlynn Wei, “The Emerging Problem of ‘AI Psychosis,’” Psychology Today, last updated November 27, 2025, last accessed January 8, 2026, https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis.

Attorneys should also make reasonable efforts to ensure that their staff do not fall victim to hallucinations and AI’s siren song. Pursuant to Md. Rule 19-305.3, a supervising attorney must make reasonable efforts to ensure that the firm has measures in place providing assurances that a non-lawyer’s conduct is compatible with the attorney’s professional obligations. In the context of generative AI, the duty of supervision requires attorneys to ensure that paralegals, legal assistants, and other staff use AI tools in a manner consistent with ethical and professional standards. Appropriate supervision should include restricting AI use to approved platforms that meet the firm’s confidentiality and security standards, vigilant review of AI-assisted work product, and adherence to internal policies governing permissible uses of AI, particularly regarding accuracy and professional judgment.

Confidentiality and AI

Implicit in any attorney’s understanding of the risks associated with AI is that the attorney must maintain client confidentiality under Md. Rule 19-301.6 (ABA Model Rule 1.6). Until recently, many AI platforms did not ensure the confidentiality of user input, since much of that input was used to train and refine these newly developing AI models. This “self-learning” feature of AI models was of particular concern for attorneys, largely because it could result in the inadvertent disclosure of client information.

As noted by the ABA in its July 2024 opinion, the “self-learning” AI tools then available posed a serious risk of disclosure and a violation of the attorney-client privilege. This is because, as self-learning AI generated a prediction, it would then compare both the prompt and its prediction to the existing dataset, allowing for further model fine-tuning. Because the input became a part of the dataset used to generate new predictions, any confidential information input into an AI model may be “later revealed in response to prompts by lawyers working on other matters, who then share that output with other clients, file with the court, or otherwise disclose it.” Due to these confidentiality concerns, the ABA ultimately concluded that using self-learning AI platforms constituted a breach of an attorney’s confidentiality obligation.

The AI landscape has notably changed since the ABA’s analysis in July 2024—there are now many more service-tier options to ensure the confidentiality of input data.6 Most current AI platforms now offer business- and enterprise-level plans that explicitly exclude user data from model training, addressing the core issue of the ABA’s earlier confidentiality concerns. These newer offerings implement explicit data-use agreements, ensure data encryption at rest and in transit, and provide for the deletion of data after a specified period. Representative examples include ChatGPT Business, which is compliant with several regulatory guidelines on data security and privacy, including the European Union’s General Data Protection Regulation, the California Consumer Privacy Act, and the Cloud Security Alliance Security, Trust & Assurance Registry.7 Other similar, confidential, and secure offerings include Google’s Gemini, Microsoft’s Copilot, and Anthropic’s Claude. Indeed, many of the above confidential AI tools that do not use data for model training are now recommended for use by attorneys on the Maryland State Bar Association’s (MSBA) website, through its online AI Insights and Resources Hub, courtesy of MSBA’s AI Task Force.8

6 Different platforms have rolled out confidential service tiers starting in late 2023 and continuing through 2025; some platforms, following later than others, announced confidential service tiers as late as January 1, 2026.

7 “Enterprise privacy at OpenAI,” OpenAI, last accessed November 18, 2025, openai.com/enterprise-privacy/; “Think Bigger: How Small Teams Win with ChatGPT,” OpenAI, last accessed November 18, 2025, cdn.openai.com/business-guides-and-resources/how-small-teams-win-with-chatgpt.pdf.

Given the heightened data privacy and security measures, combined with the relatively low cost of these paid tiers, attorneys may use paid versions of these general AI tools while still ensuring client confidentiality. But even with paid plans, attorneys must still ensure that the service tier for the selected AI platform specifically excludes using client data for model training, as some lower paid tiers still use data for model training.9 There is also a growing number of AI models designed specifically for the legal field and its confidentiality obligations, including those from Westlaw and LexisNexis. These models not only provide enhanced data security but are also designed for legal-specific tasks such as research, legal writing, and case analysis.

Regardless of the AI platform chosen, there may be particular cases or information where additional precautions or complete avoidance of AI is necessary. Such cases may involve highly confidential information, trade secret information, or high-profile celebrities, where there is a high risk of targeted hacking. Accordingly, attorneys should determine, for each case, whether AI is appropriate before deciding which model to use.

Recommendations for Implementing AI in Law Firms

While attorneys can avoid common AI pitfalls by independently verifying all AI-generated content and making reasonable efforts to ensure compliance with their professional responsibilities, they can also take additional steps at a firm-wide level to implement generative AI responsibly.

First, firms should identify and approve appropriate AI platforms before permitting any AI use for work product. Firms should evaluate and select AI tools based on their confidentiality protections, data-use policies, security safeguards, and alignment with professional responsibility obligations. This helps to ensure consistency, reduce risk, and provide a clear baseline for ethical compliance for any AI use across the firm.

Second, firms should consider adopting a written AI policy. This policy should identify approved AI tools, appropriate uses for these tools (such as drafting, summarization, and research), and prohibited uses (such as when highly sensitive information is involved or a task requires critical, complex legal judgment). The policy should also establish expectations for verification, review, and ethical compliance. By establishing a clear policy, firms can provide guidance to attorneys and staff and emphasize the due diligence and accountability necessary for the use of AI.

Third, attorneys and staff should receive training and continued education on the use of generative AI. Initial training should include instruction on both ethical obligations and practical uses, such as effective prompt drafting, iterative refinement of outputs, and rigorous citation checking, verification, and review. In addition, training should prepare attorneys and staff to communicate appropriately with clients about AI, including how AI is used, its limitations, and how professional judgment and oversight are maintained. Continuing education should include CLE programs, bar association resources, industry publications, and vendor-provided materials. Given the rapid pace of technological change, continuing education should be periodic and sustained rather than episodic, serving not only to track evolving ethical guidance but also to identify new and improved applications of AI in legal practice.

Fourth, firms should maintain supervision and accountability mechanisms consistent with Md. Rule 19-305.3. Attorneys should ensure that both lawyers and staff use only approved platforms and adhere to firm AI policies. Ongoing supervision should reinforce vigilance, including reminders to avoid unauthorized tools, to review AI-assisted work carefully, and to remain alert to evolving risks and guidance. Even modest supervisory measures can help ensure that AI use remains consistent with professional obligations and client interests.

Finally, attorneys should foster a culture of responsible innovation within their firms. Experimentation with new technology can and should be encouraged, but it should remain deliberate, documented, and subject to review, all within the bounds of professional and ethical standards. AI should be presented and used as a tool that enhances competence and efficiency, not as a substitute for professional judgment or human oversight. When firms establish this kind of culture, ethical and effective use of generative AI becomes a natural extension of sound legal practice rather than an exception requiring special justification.10

8 “AI Insights & Resources HUB,” MSBA, last updated July 16, 2025, last accessed November 18, 2025, www.msba.org/site/site/content/Resources-and-Tools-Content/AI-Insights-Resources-HUB.aspx?hkey=16802e5d-9b7e-458f-9ac9-f06be4bc65a9.

9 Attorneys must also be aware of any changes to the privacy standards or procedures for each tier. For example, Claude Pro and Max formerly agreed not to use any user data for training; however, starting in October 2025 users had to manually opt out of the use of their data for training. “Updates to Consumer Terms and Privacy Policy,” Anthropic, August 28, 2025, https://www.anthropic.com/news/updates-to-our-consumer-terms. Claude for Work was not affected and remained confidential.

Looking Ahead: AI as a Partner in Competence and Client Value

Generative artificial intelligence is neither a passing trend nor an existential threat to the legal profession. As with prior technological shifts, its impact depends not on the tool itself, but on how attorneys choose to understand and use it. While ethical concerns surrounding AI are real and demand careful attention, they do not require a categorical rejection of AI; rather, they call for informed, deliberate adoption grounded in existing professional responsibility rules.

When used thoughtfully and with appropriate safeguards, generative AI can serve as a powerful complement to legal judgment, improving efficiency in drafting, research, discovery review, and administrative work while preserving the attorney’s central role as the ultimate decision-maker. Responsible use of AI can reduce costs, improve consistency and quality, and expand access to legal services, all without compromising ethical standards.

By understanding its capabilities and limitations, selecting appropriate tools, and maintaining independent judgment and ethical oversight, attorneys can ensure that AI strengthens rather than undermines the profession’s core commitments to competence, integrity, and client service.

Disclaimer: This article is intended for educational and informational purposes only and does not constitute legal advice. The views and opinions expressed herein are those of the author and do not represent official guidance, binding authority, or the position of the Maryland State Bar Association or any other organization. Readers should not rely on this article as a substitute for independent legal research or consultation with a qualified attorney regarding specific situations or ethical obligations.

Adam M. Spence, Esq., is principal of Spence Law Group in Towson, focusing on trust and estate litigation. Over his 30-year career, he has become a recognized voice on legal ethics and technology. He currently serves on the MSBA AI Task Force and previously served on an MSBA Task Force that secured passage of Maryland’s Statute Against Financial Exploitation.

Nicole L. Bustard is a paralegal at Spence Law Group in Towson, where she focuses on writing, researching, and supporting the firm’s litigation services. She graduated summa cum laude with a Bachelor of Arts in History and Classical Studies in May 2024. She plans to start law school in the fall of 2026.

Beyond the Hype and Fear - Implementing AI in a Law Practice by Maryland Bar Journal - Issuu