AI Has Hallucinated Fake Details In At Least 117 Legal Cases, Leading To Penalties On Lawyers

AI systems are extremely powerful, but they have an unfortunate tendency to hallucinate, or make up non-existent information. And this can hurt AI users in a big way when getting the details right is of paramount importance.

AI systems have hallucinated fake details in at least 117 legal cases, leading to penalties for the lawyers involved when the fabrications were discovered. A website has meticulously compiled these cases, and new ones are being added every month. As many as 20 such instances were from May 2025 alone, which indicates that even as LLMs have become more powerful with capabilities like Deep Reasoning and Deep Research, lawyers are still inadvertently adding fake information to their arguments.

Most of the instances involve AI systems making up fake citations. In the legal world, precedent is a big part of determining future cases, and lawyers often argue their cases based on how older cases of a similar nature were judged. But if lawyers turn to AI to draft their arguments, it can come up with citations to cases that never existed.

The 117 compiled instances include “Non-existent or misrepresented cases”, “multiple fake citations and misquotations”, “reference to a seemingly fictitious judgment”, “at least 25 fabricated or non-existent case citations”, “multiple invented quotations from real or fictitious cases” and “fictitious file numbers and invented citations”. Once these fabrications were discovered by the courts, the lawyers who had used AI were either given warnings or hit with monetary penalties ranging from $1,000 to $31,000. Lawyers have been caught with fake citations in countries including the US, the UK and Israel.

The legal field requires lawyers to do a lot of research on past cases, which can be laborious and time-consuming. It appears that lawyers around the world have taken to AI in a big way, and are using it to analyze cases. And while AI systems must be producing good results for so many lawyers to use them, they also seem to quietly insert completely fake details into their analyses. Lawyers who don’t double-check AI’s work can end up including these fabrications in their arguments, and if the opposing side is alert, can end up getting penalized. Which just goes to show that AI systems might not yet be ready for unsupervised deployment in critical industries: while they seem to be assisting thousands of lawyers, they’re also leaving many of them with egg on their faces.
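For lawyers (or their staff) who do want to double-check an AI draft, one mechanical first pass is to run every citation through a free case-law search engine before filing. The sketch below assumes CourtListener’s public search API; the endpoint path, parameters, and response fields shown are assumptions that should be verified against its current documentation, and a search hit only confirms that a case exists, not that the quotes or holdings attributed to it are real.

```python
# A minimal sketch of double-checking AI-generated citations before filing.
# Assumes CourtListener's free case-law search API (courtlistener.com);
# the exact endpoint path and response fields may differ from what's shown.
import requests

SEARCH_URL = "https://www.courtlistener.com/api/rest/v4/search/"  # assumed endpoint

def citation_exists(citation: str) -> bool:
    """Return True if a search for the citation returns at least one opinion."""
    resp = requests.get(
        SEARCH_URL,
        params={"q": citation, "type": "o"},  # "o" = case-law opinions (assumed)
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

# Example citations from a hypothetical LLM draft; the second is the
# fabricated case that featured in the widely reported Mata v. Avianca matter.
draft_citations = [
    "Brown v. Board of Education, 347 U.S. 483 (1954)",
    "Varghese v. China Southern Airlines, 925 F.3d 1339 (11th Cir. 2019)",
]

for cite in draft_citations:
    status = "found" if citation_exists(cite) else "NOT FOUND - verify manually"
    print(f"{cite}: {status}")
```

Even a crude existence check like this would have flagged some of the invented cases that have led to sanctions, though it is no substitute for actually reading the opinions being cited.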
