
What a Legal Ethics Opinion on Generative AI Proficiency Reveals About Future-Proofing Your Law Practice

Estimated reading time: 5 minutes

The technology is new, but the ethical concerns it raises are not.

Generative artificial intelligence (AI) has the potential to transform the practice of law with its ability to quickly analyze vast sets of data and draft detailed legal documents and briefs. But its propensity for errors underscores the importance of ensuring accuracy, transparency and accountability in the practice of law — all of which have long been crucial attributes of human attorneys.

This was the subject of a recent legal ethics opinion that says lawyers must be proficient in AI, just as they’re responsible for understanding all technology relevant to their practice (including legal research platforms, e-discovery software and email). While this is not a California bar opinion, it is the latest of many clear indications that AI will have a long-term impact on the practice of law, and it sets the stage for important discussions about its use as well as necessary guardrails.

As with any new tool, generative AI comes with benefits and pitfalls, so it’s incumbent upon attorneys to stay informed and put their clients’ interests first as they adapt. Or, as the ethics opinion puts it, “Now that it is here, attorneys need to know what it is and how (and if) to use it.”

Here are the key considerations for California attorneys incorporating generative AI into their legal research.

Expect hallucinations

Generative AI technology can produce incorrect answers or “hallucinations,” which often seem believable because large language models (LLMs) are designed to produce fluent and coherent text. These hallucinations happen because LLMs rely on statistical methods — rather than an actual understanding of the subject matter — to produce text based on a prompt.

So, while it makes sense to let generative AI save you time and resources when conducting legal research, you should expect the occasional error. In fact, one recent Stanford University study found that legal tools using generative AI hallucinate between 17% and 33% of the time.

Validate everything

Always check AI-generated information against trusted sources to confirm its accuracy. This is especially crucial for the legal industry, which depends on precision and ethical integrity. Flag any references to case law, statutes, regulations or scholarly articles and check them against authoritative sources such as official court records and reliable legal databases to confirm they accurately reflect the content they cite.

The consequences of not doing this are clear: misleading legal advice, compromised case outcomes and potential ethical violations. We’ve already seen instances like these play out in courts across the country, which have sanctioned attorneys for relying on inaccurate AI-generated information.
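
For firms with technical staff, even a small script can make this checking step systematic. The sketch below (in Python) is purely illustrative and relies on a simplified, hypothetical citation pattern; it only produces a checklist of citation-like strings to confirm by hand against official sources, and it verifies nothing on its own.

```python
import re

# Purely illustrative: a simplified pattern for reporter-style citations
# such as "410 U.S. 113" or "58 Cal.4th 1081". Real citation formats vary
# widely, so treat this as a starting point, not a full citation parser.
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\.Ct\.|F\.\d[a-z]{1,2}|Cal\.(?:App\.)?\d[a-z]{1,2})\s+\d{1,4}\b"
)

def extract_citations(ai_draft: str) -> list[str]:
    """Return every citation-like string found in an AI-generated draft."""
    return CITATION_PATTERN.findall(ai_draft)

if __name__ == "__main__":
    # Made-up draft text for demonstration only.
    draft = (
        "The court applied the standard articulated at 410 U.S. 113 and "
        "followed the reasoning of the decision reported at 58 Cal.4th 1081."
    )
    # Each entry is only a checklist item; a human still has to confirm it
    # against official court records or a trusted legal database.
    for citation in extract_citations(draft):
        print("Verify against an authoritative source:", citation)
```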

Protect your clients

While AI can expedite the retrieval and analysis of legal information, never let it compromise your duty to your clients. Put their best interests first by safeguarding all information relating to their cases and treating their sensitive data with the utmost care. Guidance on AI use from the State Bar of California Standing Committee on Professional Responsibility and Conduct explains that attorneys have duties of confidentiality, competence and diligence.

Never input confidential information into an open, public-facing AI platform such as ChatGPT or Gemini because these systems are not designed or equipped to guarantee the security and privacy of sensitive data. Exercise caution even when using private AI-powered legal research tools and be sure that they have robust security protocols in place. This should be clearly disclosed on a research tool’s website.
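
If text must pass through any external AI system, a pre-submission redaction pass can add one more layer of protection. The sketch below is a hypothetical, intentionally minimal illustration that catches only a few obvious identifiers with simple patterns; it is not a complete solution, and it never replaces an attorney's judgment about what should leave the firm at all.

```python
import re

# Purely illustrative patterns for a few obvious identifiers. Real client
# data contains far more than this (names, addresses, matter details), so
# automated redaction supplements, but never replaces, attorney judgment.
REDACTION_RULES = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with labeled placeholders before the
    text ever leaves the firm's own systems."""
    for label, pattern in REDACTION_RULES.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

if __name__ == "__main__":
    # Made-up prompt text for demonstration only.
    prompt = (
        "Summarize the deposition of the claimant (SSN 123-45-6789, "
        "reachable at jane.doe@example.com or (415) 555-0123)."
    )
    print(redact(prompt))
```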

Disclose your AI use

Be open with your clients about how generative AI is supporting your practice. Explain what tools you’re using, how you’re using them and what impact that has on your firm’s capabilities.

Additionally, ensure all AI-related expenses are reasonable and disclosed to clients. The California Bar’s AI guidelines specify that firms cannot charge clients for time saved due to AI use, but they can charge for time spent refining AI prompts or reviewing AI-generated responses.

The guidance recommends introducing an agreement that outlines the fees and costs associated with the use of generative AI to maintain transparency and ensure client understanding and consent. This approach not only enhances trust but also reinforces your firm’s commitment to delivering informed and ethical legal representation. Your clients will likely appreciate the increased efficiency AI provides as long as you demonstrate the steps you’re taking to ensure accuracy and security.
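
To make that billing principle concrete, here is a hypothetical illustration using made-up hours and rates: time spent refining prompts and reviewing AI output is billed, while the drafting time the AI saved is not.

```python
# Hypothetical numbers only: a quick illustration of the billing principle
# described above, where time spent working with AI is billable but time
# the AI saved is not.
HOURLY_RATE = 400  # assumed rate in dollars

time_entries = [
    {"task": "Refining AI research prompts", "hours": 0.5, "billable": True},
    {"task": "Reviewing and correcting AI-generated memo", "hours": 1.2, "billable": True},
    {"task": "Drafting time saved by AI (work not performed)", "hours": 3.0, "billable": False},
]

billable_hours = sum(e["hours"] for e in time_entries if e["billable"])
print(f"Billable hours: {billable_hours}")                      # 1.7
print(f"Amount billed: ${billable_hours * HOURLY_RATE:,.2f}")   # $680.00
```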

Choose a platform you trust

Carefully consider which AI-powered platform you use to support your legal research. Legal-specific AI tools generally draw on more reliable and verifiable sources of data, whereas consumer-facing AI products rely on broader, less scrutinized data sources.

Look for platforms that are taking a strategic and thoughtful approach to integrating AI into their operations and products, rather than those rushing to adopt the latest technology. Similarly, legal research tools that pair AI with human oversight are more likely to prioritize quality, accuracy, ethics and compliance.

Make sure you understand how the AI model has been trained, as the data it’s based on will directly influence its outputs and behavior. If an AI tool was trained on biased or unrepresentative data, for example, its decisions and recommendations could be flawed or even discriminatory. Understanding the training methodology will help you assess how reliable the platform is and better interpret its responses and suggestions.

Stay informed

It’s an overused phrase, but knowledge really is power when it comes to implementing new technology. Engage with the ethical issues and best practices for using AI by following legal news, completing continuing legal education courses and other training, and actively participating in professional forums and discussions.

Generative AI has the potential to redefine your firm’s approach to legal research by streamlining processes, providing faster, more concise answers and freeing up time for higher-level tasks. But the risks of using AI-powered legal tools make the critical thinking skills and ethical standards of attorneys even more important — and the secret to future-proofing your law firm.

At CEB, we believe AI is a transformative and exciting technology for the legal profession, but attorneys remain the primary content creators and curators of our legal content and offerings. Get in touch to schedule a free demo.