
Ethical Considerations for Attorneys Using AI in Their Practice

By: Don Ho

Estimated reading time: 5 minutes

The legal profession has been relatively slow to adopt artificial intelligence (AI) technologies compared to other industries. However, AI is rapidly advancing, and attorneys who wish to maintain a competitive edge need to leverage these powerful tools wisely. AI can assist with legal research, contract review, due diligence, e-discovery, and predictive analytics. When utilized properly, AI has the potential to make attorneys more efficient and effective.

However, the use of AI by lawyers also raises a number of ethical considerations that must be carefully evaluated. As with any disruptive technology, a level of due diligence is required to ensure AI tools are being used in a manner that is compliant with governing rules of professional conduct. Let’s examine some of the key ethical issues surrounding AI in the legal sphere.

Client Confidentiality

One of the core ethical obligations for attorneys is the duty to protect client confidentiality. Any AI system that is privy to confidential client information must have robust security and access controls in place. Lawyers need to vet the data privacy policies and practices of AI vendors. Client data should be encrypted in transit and at rest. Role-based access restrictions are a must as well.

Additionally, attorneys should disclose their use of AI services to clients and obtain consent, especially if third-party cloud services are involved that could expose information to vendors. While AI does not change the underlying duty of confidentiality, law firms do have an ethical obligation to understand where data flows and ensure adequate safeguards exist.

Competence and Supervision

Lawyers have an ethical duty to provide competent representation and uphold their professional obligations. They must have a reasonable understanding of how any AI tools work and properly supervise their use. Attorneys cannot simply treat AI as a “black box” and blindly rely on outputs.

Firms adopting AI technologies need to provide training and oversight to ensure proper deployment, especially considering the potential for AI bias and errors. Algorithms can encode human prejudices from the training data. Furthermore, AI aimed at predicting outcomes, such as for litigation analytics, may rely on historical patterns that no longer hold true.

Maintaining proper competence with AI also requires a degree of technical savvy. Do lawyers understand the basics of machine learning and natural language processing? Do they know how to audit for bias, test performance, and interpret results correctly? As with any new technology, a reasonable amount of due diligence is required.

Independence and Outside Interests

Most AI systems are not developed in-house but rather licensed from third-party vendors. Law firms need to scrutinize those business relationships and any potential conflicts of interest. For example, an AI legal research tool backed by a major company could potentially prioritize results that favor that company.

Lawyers also need to be wary of relying too heavily on a single AI vendor, as that dependence could undermine their professional independence. A lack of technological diversity could create over-reliance on one system. Independence means maintaining the ability to exercise objective professional judgment on behalf of clients, free from compromising outside influences.

Explicability and Due Process

Some AI algorithms, particularly in the domain of machine learning, can be opaque “black boxes” that lack explicability. If an AI system significantly contributes to a legal strategy or court ruling, there are due process concerns around transparently justifying the basis for that decision.

In civil litigation proceedings, for instance, all evidence and logic must be open to adversarial challenge and scrutiny by all parties. AI systems that rely on deep neural networks may excel at pattern recognition but struggle to provide a clear rationale for their outputs.

Relatedly, government agencies and courts issuing rules about AI increasingly emphasize the need for human oversight and the ability to reproduce and inspect AI determinations. Systems lacking explicability could run afoul of these regulations.

Upholding Access to Justice

One of the noble principles of the legal profession is to ensure access to justice and zealously advocate on behalf of clients from all walks of life. Attorneys should be cautious about how their use of AI could negatively impact these ideals.

For example, if large firms gain a monopoly on the best AI tools through their buying power, that could create a divide in which only affluent clients receive the highest quality of technology-enabled legal counsel. There are also concerns about AI exacerbating historical biases and inequalities if not carefully monitored.

Lawyers using AI need to be intentional about ensuring technologies promote fairness, accountability, confidentiality, and transparency throughout the justice system. Profit motives cannot outweigh these ethical duties.

As AI capabilities rapidly advance, it is crucial that the legal profession get ahead of these ethical risks. Bar associations, law firms, and legal tech providers need to establish clear governance frameworks around developing and utilizing AI responsibly in adherence to the rules of professional conduct. With proper oversight and guidance, AI can be a powerful tool for attorneys to leverage in a manner that is ethical and promotes the interests of clients and society.

Don Ho, Esq. is a Partner and Strategic Technology Counsel for Touchpoint Strategies – advising companies on growth strategies and the legal aspects of AI integration in their businesses. 

© The Regents of the University of California, 2024.

This article is also accessible on the CEB DailyNews platform at: https://research.ceb.com/posts/applying-section-230-to-facebooks-ad-business-new-case-has-big-implications