Lawyers know the ethical obligations they owe clients require them to supervise the people that help them (like paralegals, clerks and vendors). But what about “the robots”? Do lawyers have an ethical duty to supervise artificial intelligence (AI) if they use it?
Indeed they do.
In fact, this month the American Bar Association Science and Technology Section unveiled a resolution addressing the various ethical implications lawyers face when using AI. The stated purpose of the resolution is “to urge courts and lawyers to address the emerging legal and ethical issues related to the usage of AI.”
Other than Model Rule of Professional Conduct 1.1 (Lawyer’s Duty of Competence) and a comment to the rule, adopted by most states, requiring lawyers to stay abreast of changes in technology, I had not spent much time thinking about which Rules of Professional Conduct are implicated by the use of AI. However, that all changed when I had a conversation with my friend and ethics attorney Jim Doppke on the Technically Legal Podcast.
Of course it makes sense that legal ethics rules now address technology. I never thought that a lawyer could blindly rely on software. But one of the most intriguing things Jim pointed out during our talk was a change made in 2012 to Rule of Professional Conduct 5.3.
The rule, formerly titled “Responsibilities Regarding Nonlawyer Assistants,” is now entitled “Responsibilities Regarding Nonlawyer Assistance.” It requires lawyers to supervise the people helping them and to ensure they do not do anything that violates the Rules.
As noted in the ABA resolution, the change to Rule 5.3 is intended to “clarif[y] that the scope of Rule 5.3 encompasses nonlawyers whether human or not.” (Pretty sure there are no lawyers that are not human, but I digress…).
This wording change makes Rule 5.3 one of the main rules governing attorneys’ use of AI. Specifically, it means that when AI is used in a legal matter, the attorney responsible for the project must monitor how the underlying algorithm is trained and applied.
You very well might be using AI if you use any of the following:
If you are using AI in your legal practice, Rule 5.3 is not the only Rule of Professional Conduct you need to think about. As explained in the ABA resolution, there are a few others:
Rule 1.1 (Duty of Competence, as noted above): “lawyers must have a basic understanding of how AI tools operate. While lawyers cannot be expected to know all the technical intricacies of AI systems, they are required to understand how AI technology produces results.”
Rule 1.4 (Duty to Communicate with Clients): “A lawyer’s duty of communication under Rule 1.4 includes discussing with his or her client the decision to use AI in providing legal services. . . . In certain circumstances, a lawyer’s decision not to use AI also may need to be communicated to the client if using AI would benefit the client. Indeed, the lawyer’s failure to use AI could implicate ABA Model Rule 1.5, which requires lawyer’s fees to be reasonable.”
Rule 1.6 (Duty of Confidentiality): “The use of some AI tools may require client confidences to be ‘shared’ with third-party vendors. As a result, lawyers must take appropriate steps to ensure that their clients’ information appropriately is safeguarded. Appropriate communication with the client also is necessary.”
All food for thought as the adoption of legal technology becomes more prevalent. For a deeper dive into the ethics of legal tech, I encourage you to listen to my conversation with Jim Doppke on the Technically Legal Podcast.