
By: Dustin Lennon-Jones
What happens when your legal advocate isn’t human? In an age where generative AI can write essays, compose music, and even mimic human speech with startling realism, some have begun to wonder: can it argue a case in court? As AI-powered services become more sophisticated, the legal world finds itself at a crossroads: how far should generative AI be allowed to go in the courtroom? The complex, evolving relationship between AI and the law raises important questions about the ethical use of AI in the legal community and the fine line between innovation and deception.
GenAI as Your Lawyer?
Jerome Dewald was representing himself in a dispute with a former employer, but a past battle with throat cancer made it difficult for him to speak at length in court. With his case on appeal before a New York State appellate court, Dewald applied for and was granted permission to play a recorded video in place of standard oral argument.
However, when the video began, the five-judge panel was instead greeted by a man who looked nothing like Dewald. When one judge asked whether the man was his lawyer, Dewald responded that he had generated the video and that the man in it was not real. Associate Justice Sallie Manzanet-Daniels was not pleased, rebuking Dewald for misleading the court and for attempting to promote his own business.
Dewald had used the services of Tavus, a San Francisco-based generative AI start-up. The product Dewald attempted to use allows users to upload a video of themselves talking, from which the program generates a “photo-realistic replica” of the user. This digital alter ego can then be fed a script, which it reads aloud in the user’s voice. Dewald, however, was unable to create a satisfactory replica of himself and settled on a stock avatar.
Though not used by Dewald, Tavus’ other product has the potential to be even more problematic. The conversational video interface (CVI) operates in much the same way as the replica program but can also carry on a conversation of its own. According to Tavus, the CVI allows a user to build an AI agent that “feel[s] like talking to an actual person.” This means that, at least in theory, AI agents could be used to participate in oral argument and respond to a judge’s questioning.
Legal Framework for the Use of AI
Under New York law, parties may appear and participate in civil actions personally or be represented by a licensed attorney. Had Dewald used the CVI software, the violation would have been clear: because the AI avatar is neither a party nor a licensed attorney, it cannot appear in court. The legality of a script-reading avatar is less clear.
Judge Manzanet-Daniels seemed to take issue most with the fact that the court had been misled, telling Dewald that it “would have been nice to know that when you made your application.” As Dr. Adam Wandt has speculated, this may indicate that had Dewald disclosed his intent to use an AI avatar, his application would have been denied.
Some courts have embraced this disclosure approach. One trial court in New York requires that any document prepared with the use of AI contain a statement identifying the portions drafted by AI and certifying that a human reviewed them for accuracy. The Western District of North Carolina took the opposite stance, requiring a certification that no AI was used in preparing court filings. The Eastern District of Missouri, while not requiring disclosure, warns litigants that they are responsible for any AI-generated content in their filings.
As is often the case, new technology is advancing faster than the rules that regulate it. Dewald’s use of the Tavus service is part of a growing trend of AI misuse that underscores the need for clearer guidance. In one such case, two lawyers who used ChatGPT to perform legal research were fined $5,000 after the chatbot fabricated non-existent cases, which they then cited. Michael Cohen, the former personal attorney of Donald Trump, used Google’s AI service Bard, which similarly hallucinated cases that Cohen cited in a motion. And in perhaps the most egregious case, the FTC fined DoNotPay, a legal information and “self-help” company, $193,000 for advertising “robot lawyers” that could replace humans in drafting legal documents. Unsurprisingly, the FTC found the service to be ineffective.
AI is Inevitable
While AI may lack a place in the courtroom, it is an increasingly embraced tool elsewhere in the profession. In a Thomson Reuters survey of legal professionals, 72% of respondents viewed AI as a force for good in the profession, and half of the responding law firms said that exploring and implementing potential uses of AI was their top priority. The potential benefits could be game-changing: at the current rate of adoption, AI could save legal professionals an average of 200 hours per person in 2025. This could allow lawyers to spend more time on expertise-driven tasks and business development, or simply to reclaim time for themselves.
The Arizona Supreme Court is leading the way in reaping some of these benefits. In March, it rolled out a new AI spokesperson program using a service similar to the one Dewald used. The justice who authored a given opinion drafts a script, which the AI spokesperson delivers in a video published to the court’s website, giving the public an easy-to-understand explanation of the outcome of a case. Court spokesman Alberto Rodriguez said this cut what used to be an hours-long process down to just 30 minutes.
Conclusion
For better or for worse, the Pandora’s box of generative AI is open. Its potential to save lawyers vast amounts of time and to increase accessibility for the public has already been demonstrated. However, cases such as Dewald’s and DoNotPay’s serve as important reminders of its limitations. Generative AI is a useful tool in a lawyer’s belt, but it is not a replacement for the lawyer.
#WJLTA #GenerativeAI #ethics #robotlawyer