AI Hallucinations in Court Orders
“I’ve never seen or heard of anything like this from any federal court,” Sen. Chuck Grassley (R-Iowa) said in a Senate floor speech Monday.
The use of artificial intelligence (and sometimes artificial stupidity) has taken a dangerous turn. It’s deplorable when lawyers use AI to draft briefs with made-up precedents and false “facts,” at least without a thorough human check. But briefs alone have no legal effect, and their errors can be caught by opposing counsel and the court.
But now there is a horrifying new turn. Daniel Wu reports for the Washington Post:
Two federal judges in New Jersey and Mississippi admitted this month that their offices used artificial intelligence to draft factually inaccurate court documents that included fake quotes and fictional litigants — drawing a rebuke from the head of the Senate Judiciary Committee.
The committee announced Thursday that the judges, Henry T. Wingate of the Southern District of Mississippi and Julien Xavier Neals of the District of New Jersey, admitted that their offices used AI in preparing the mistake-laden filings in the summer. They attributed the mistakes to a law clerk and a law school intern, respectively, according to letters the judges sent in response to a Senate inquiry.
Both faulty court documents were docketed and had to be hastily retracted after defendants alerted the judges to the errors. Neither judge explained the cause of the errors until the committee contacted them.
I have long suspected that lazy judges delegate too much to wet-behind-the-ears clerks. The problem even extends to the Supreme Court. This incident adds a new dimension.
At this point, it is well known that AI generates falsities, commonly known as “hallucinations.” To use AI to generate orders that have immediate legal effect on real people is beyond negligence. It is inexcusable recklessness. The direct perpetrators should be fired and disbarred, or in the intern’s case not admitted. The judges should be reprimanded for not coming clean immediately and voluntarily. The courts should adopt policies absolutely prohibiting the use of AI to draft opinions or orders, with severe penalties for violation.

On a different but related point, appellate judges enjoy a built-in grace period within which to issue modifications to their issued opinions that correct errors called to their attention, typically by one of the parties.
But litigants themselves have no comparable opportunity to correct errors in their briefs; at best, if they’re lucky enough to discover those errors before anyone else, they might attempt self-correction with a confessional pleading titled “Errata” that highlights their blunder.
As frequently as courts clean up their messes by orders of modification, you’d think they would be gracious enough not to upbraid litigants for their comparable missteps. Indeed, some judges are almost unfailingly gracious, but others not so much.
Every time I read an opinion that excoriates one of the parties for anything from a citation error to a gross mischaracterization of the record or the law—exactly the sort of thing courts themselves will do on occasion—I wonder whether it’s coming from the judge or some smarty-pants twenty-something who has not yet struggled with the pressures of real-world legal practice.
Valid points, as always, Ron.