AI-Hallucinated Legal Citations Lead to Court Scrutiny
The New York Times reports that the legal profession is increasingly plagued by AI blunders, with lawyers citing non-existent cases in court filings. A Texas bankruptcy filing highlighted the issue when a lawyer cited Brasher v. Stewart, a 1985 case that does not exist and was apparently fabricated by an AI tool. The judge responded by referring the lawyer to the state bar's disciplinary committee and ordering six hours of AI training.
Lawyers like Robert Freund are actively tracking instances of AI misuse. Freund, a Los Angeles-based lawyer, discovered the filing and submitted it to an online database that tracks legal AI misuse worldwide. He is part of a growing network of lawyers who monitor and report AI abuses committed by their peers, aiming to raise awareness of the problem and stamp it out.
The problem is getting worse. Damien Charlotin, a lawyer and researcher in France, started an online database in April to track the issue. Initially he found three or four examples a month; now he often receives that many in a day. Lawyers have helped him document 509 cases so far, using legal research tools such as LexisNexis to monitor keywords associated with AI misuse.
Court-ordered penalties are not having a deterrent effect, according to Freund, who has publicly flagged more than four dozen examples this year; the steady stream of new cases is evidence enough that the problem persists. Some filings include fabricated quotes from real cases, or cite real cases that have no bearing on the arguments being made. These self-appointed watchdogs typically spot the problems by searching for judicial opinions that scold lawyers over such filings.
Overall, the legal community is alarmed by the ongoing misuse of AI in legal proceedings. Hallucinated citations undermine the reliability of legal arguments and, in the worst cases, could contribute to wrongful convictions or erroneous rulings. The situation underscores the need for stricter oversight and better training so that legal professionals use AI responsibly in the courtroom.