
Please Do Your Best Not to Appear in the “AI Hallucination Database”

[Image: “AI Hallucination Cases,” https://www.damiencharlotin.com/hallucinations. Caption: “Your name here—if you’re not careful.”]

Are AI “hallucinations” a growing problem for the legal profession? Some say yes. See, e.g., Michael Hiltzik, “AI ‘hallucinations’ are a growing problem for the legal profession,” Los Angeles Times (May 22, 2025). Those people are right.

How right are they? At least this much. Because that’s a link to the “AI Hallucination Database” webpage administered by Damien Charlotin, a lawyer, lecturer, and researcher in Paris who focuses on “AI, the law, and the multiple ways these coexist.” And one of the ways these coexist is not very well, as illustrated by the ever-increasing number of cases in which lawyers, many of whom are quite intelligent themselves, have gotten in trouble for submitting the legal work of certain alleged artificial intelligences to courts around the world. See, e.g., “‘Would You Be Surprised to Learn This Case Does Not Exist?’” (Apr. 24, 2025) (hint: he was). According to the database, these cases are being reported almost every day at this point.

I should be more specific: strictly speaking, the problem isn’t submitting something an AI has generated. It’s submitting it without confirming that everything the generative AI program said is in fact true. A major subset of these mistakes involves failing to confirm that the case names it provided are in fact associated with real decisions by real courts. Because they may well not be.

Many of you likely know much more about AI than I do, so I won’t belabor this point, but an artificial intelligence is not “intelligent.” Arguably, generative AI isn’t even “artificial,” given that it’s “trained” by accessing and analyzing huge amounts of text that were created by natural persons (human beings). Algorithms then respond to prompts by creating text that looks like something a human would write, based on the aforementioned analysis of what the humans have written. This is a critical point that many people obviously do not understand: generative AI is not even designed to create statements that are true. It is designed to create statements that look like something a human would write. The result might be true, but that’s not the point. A generative AI cannot be relied on to create exclusively true statements because it does not know what truth is.
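
For the technically curious, here is a grossly oversimplified sketch, in Python, of what that process amounts to. The toy “model” below is something I made up for illustration (no real system runs on a lookup table this small; real ones use neural networks with billions of parameters), but the point survives the simplification: at no step does the program ever ask whether what it is saying is true.

```python
import random

# A toy "language model," for illustration only. This table just records
# which word tended to follow which in some human-written "training" text,
# and how often. Real systems learn this from enormous corpora.
NEXT_WORD_ODDS = {
    "the": {"court": 5, "plaintiff": 3, "case": 2},
    "court": {"held": 6, "found": 3, "ruled": 1},
    "held": {"that": 9, "the": 1},
    "that": {"the": 1},
}

def generate(prompt: str, length: int = 6) -> str:
    words = prompt.lower().split()
    for _ in range(length):
        options = NEXT_WORD_ODDS.get(words[-1])
        if not options:
            break
        # Pick the next word in proportion to how often it followed the
        # previous one. Plausibility is the only criterion here; whether
        # the resulting sentence is TRUE never comes up at any point.
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g., "the court held that the plaintiff ..."
```

Scale that up by a few hundred billion parameters and you have, roughly speaking, the thing some lawyers have been treating as a research assistant.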

It also does not know what a lie is, although I find it amusing to say that kind of thing when, for example, a lawyer tries to claim due diligence by saying that he asked the AI whether the fake cases it provided were fake and it told him no, they were real. See Mata v. Avianca, Inc., 22-cv-1461 (PKC) (S.D.N.Y. June 22, 2023). That was false, but not a lie. A generative AI does not know what truth is, nor is it even able to care in the slightest about that concept.

Wait—isn’t there a legal term of art for a statement that is provided without any regard for whether it is true or false? Yes: that term is “bullshit.” That is also the scientific term of art for this, as it happens. See, e.g., Michael Townsen Hicks, James Humphries, and Joe Slater, “ChatGPT is bullshit,” 26 Ethics & Information Tech. 1 (2024). What are often described as “hallucinations” by large language models are, as these researchers (and many others) have pointed out, “better understood as bullshit in the sense explored by [philosopher Harry] Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs.”

Indifferent to the truth of your outputs is not something a lawyer can be (and stay licensed).

In a general sense this problem isn’t new. Lawyers also get in trouble for cutting and pasting from existing work product, not because copying itself is the problem but because they do it without checking the cites and arguments themselves. But at least in that case, the original was created by a human who, hopefully, did care about the truth of his or her outputs but at a minimum was able to care. One day, I have no doubt, there will be truly intelligent artificial intelligences that will have this ability. But today is not that day.

By “day” I mean more like “century” or maybe “millennium,” so please keep this in mind for a while.

Oh, I would also keep in mind that if generative AIs are being trained on text found via the internet, an ever-increasing percentage of that text is itself being written by AIs to begin with. If that doesn’t scare you straight, consider that something like three million words of that text have been written by me and posted here. I do care about the truth of my outputs here, but I do not guarantee them. (There is probably also some bullshit in here somewhere. I think my disclaimer makes that clear.)