AI Errors in American Courts: An Ignored Threat

6 minute read

The growing use of artificial intelligence (AI) in the American judicial system raises major concerns, particularly the risk that judges overlook AI errors, such as AI hallucinations, in legal documents. According to John Browning, a former judge and legal expert, it is "frighteningly likely" that many courts, particularly overburdened lower courts, will overlook these errors, compromising the integrity of judicial decisions.


AI Hallucinations: An Emerging Threat in Courts

AI hallucinations, errors in which artificial intelligence models generate false but plausible information, are appearing more and more often in legal documents. A recent example, reported by Ars Technica, concerns a divorce case in Georgia, where an order drafted by attorney Diana Lynch contained fictitious legal citations generated by an AI. Although the error was detected in that case, it could become commonplace in overwhelmed courts, where judges often rely on attorneys to draft proposed orders.

Judges facing crushing workloads risk rubber-stamping documents that contain AI errors, such as citations of non-existent cases or flawed arguments. According to Browning, the problem is particularly concerning in civil and criminal cases, where judges depend on attorney proposals to manage voluminous files.


Why Are Courts Vulnerable?

Several factors explain this vulnerability to AI errors:

1. Court Overload

Lower courts, which handle a large number of cases, often lack time to thoroughly verify each document. This pressure leads judges to trust attorney proposals, increasing the risk of incorporating AI hallucinations into judicial decisions.

2. Lack of Technological Competency

Only two states, Michigan and West Virginia, have adopted ethical opinions requiring judges to have technological competency in AI. In most jurisdictions, judges are not trained to identify red flags related to AI-generated content, such as questionable legal citations or factual inconsistencies.

3. Growing Dependence on AI

AI tools, such as ChatGPT, make it easier for attorneys and unrepresented litigants to draft legal documents, thus increasing the volume of filed cases. According to the National Center for State Courts, this accessibility increases court workloads, making the detection of AI errors even more difficult.

4. Absence of Clear Policies

Many law firms and courts have not yet established clear policies on AI use. In one Utah case, an office employee used ChatGPT without authorization or supervision; the absence of guidelines led to the filing of fictitious citations and, ultimately, to the employee's termination.


Consequences of AI Errors in the Judicial System

AI errors in courts can have serious consequences:

  • Harm to Justice: Decisions based on incorrect citations or facts risk depriving parties of a fair defense, increasing costs for opposing counsel and courts.
  • Erosion of Trust: If judges validate documents containing AI hallucinations, the authority and credibility of courts could be compromised.
  • Sanctions for Attorneys: Cases like that of Mike Lindell, where attorneys were sanctioned $6,000 for incorrect citations generated by an AI, show that legal professionals risk fines and professional setbacks.

Furthermore, AI hallucinations are not limited to legal citations. In a case involving Anthropic, errors generated by the Claude chatbot led to incorrect citations in expert testimony, highlighting risks even for professionals aware of AI limitations.


Solutions to Counter AI Errors

To reduce risks related to AI errors, several measures are being considered:

1. Judge Training

Courts must invest in training judges to recognize AI hallucinations. States like Virginia and Montana have already adopted laws requiring human oversight of AI systems used in judicial decisions, while others have created task forces to develop guidelines.

2. Strict Policies for Attorneys

Law firms must establish clear protocols for AI use, including independent verification of citations and facts. The Utah case, where an employee was fired for using ChatGPT without supervision, demonstrates the importance of such policies.

3. Enhanced Document Oversight

Judges must carefully examine proposed orders and submitted documents, particularly in high-volume jurisdictions. Tools based on artificial intelligence could be developed to automatically detect questionable citations or inconsistencies.
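As a rough illustration of what such a tool might do, the sketch below extracts strings that look like case citations from a filing and flags any that cannot be matched against a trusted reference set. This is a hypothetical toy, not an existing product: the citation pattern is heavily simplified, and a real system would verify matches against an authoritative legal database rather than a hard-coded set.

```python
import re

# Simplified pattern for citations of the form "volume Reporter page",
# e.g. "123 F.3d 456". Real citation formats vary far more widely.
CITATION_RE = re.compile(r"\b\d+\s+[A-Z][A-Za-z0-9.]*\s+\d+\b")

def flag_unverified_citations(text: str, known_citations: set[str]) -> list[str]:
    """Return citation-like strings in `text` absent from `known_citations`.

    `known_citations` stands in for a lookup against an authoritative
    source; anything not found there is flagged for human review.
    """
    found = CITATION_RE.findall(text)
    return [c for c in found if c not in known_citations]

filing = "Relying on 123 F.3d 456 and 999 U.S. 111, the motion should be granted."
verified = {"123 F.3d 456"}  # hypothetical set of confirmed citations
print(flag_unverified_citations(filing, verified))  # → ['999 U.S. 111']
```

A tool like this would not decide whether a citation is fabricated; it would only surface candidates that a clerk or judge must still check by hand, which is consistent with the human-oversight approach the article describes.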

4. Regulation and Ethics

Experts, such as those interviewed by the Pew Research Center, emphasize the need for regulation to govern AI use in the judicial system. Greater inclusion of diverse perspectives in AI development could also reduce bias and errors.


A Challenge for the Future of Justice

The integration of artificial intelligence into courts offers opportunities, such as faster drafting of legal documents, but it requires increased vigilance. As Browning emphasizes, “ultimate responsibility rests with the human judge, with their qualities of legal reasoning, empathy, and ethics.” AI errors, if undetected, could not only compromise individual cases but also erode public confidence in the judicial system.

In conclusion, AI errors represent a growing threat to American courts, particularly in overburdened jurisdictions. By strengthening judge training, establishing clear policies, and promoting rigorous oversight, the judicial system can leverage the benefits of AI while minimizing its risks. This transition will require collective effort to preserve the integrity of justice in the digital age.

Sources

https://arstechnica.com/tech-policy/2025/07/its-frighteningly-likely-many-us-courts-will-overlook-ai-errors-expert-says

https://law.mit.edu/pub/generative-ai-responsible-use-for-law/release/9

https://www.ncsc.org/resources-courts/genai-revolutionizing-court-filings
