By Brent C. J. Britton
Every so often the legal world produces a story so neatly constructed it feels less like news than parable. This one practically arrives with a moral stapled to the front.
A New York lawyer, handling what should have been a routine dispute over a loan, submitted court filings peppered with citations that turned out to be entirely imaginary, or, as we say in the parlance, hallucinated. Cases that had never been decided, never reported, never lived anywhere except inside the confident improvisations of a large language model.
That alone would have earned him a cautionary footnote, one among too many nowadays. What followed elevated the episode to something closer to farce.
When opposing counsel moved for sanctions, the lawyer responded with another brief, this one containing even more fabricated citations than the first. Having been caught once, he reached again for the same tool, apparently hoping the court would not notice that the second hallucination was wearing the same costume as the first.
Judge Joel Cohen of the New York Supreme Court noticed immediately. In an order that will be quoted for years, he distilled the problem with surgical clarity. “Counsel relied upon unvetted AI… to defend his use of unvetted AI.”
Yes. That sentence exists in an actual judicial order.
When plaintiff’s counsel flagged the original citations as hallucinated, defense counsel’s response insisted that the citations were merely “innocuous paraphrases of accurate legal principles,” a phrase that sounds plausible until you recall that paraphrasing requires something to exist in the first place. It is also, in letter and spirit, the opposite of how case citations work.
As the errors piled up, the lawyer demanded “forensic confirmation” that artificial intelligence had been used, a bold move given that the cases in question belonged more properly to fantasy literature. At oral argument he maintained, with a straight face, that none of the authorities were fabricated.
Oh, dear reader, they were so fabricated.
Eventually the concession came. He had used AI after all, though not unvetted AI, or so he claimed. Judge Cohen dispatched that distinction briskly. If a tool produces citations that do not exist and you file them anyway, you have used unvetted AI by definition. Sanctions followed.
It is tempting to frame this as another story about artificial intelligence running amok, but that misses the point. Language models hallucinate because they are designed to sound right, not to be right. They improvise fluently and without shame. That quality is impressive in a brainstorming session and catastrophic in a legal filing.
The responsibility lies with the human who signs the paper.
Lawyers keep finding themselves in this position because the profession runs on pressure and deadlines and a heroic amount of caffeine. When a tool promises instant drafts and authoritative-sounding research, the temptation to rely on it is real. Yet the ethical rules do not bend to accommodate temptation. When you sign your name to a brief, you assume responsibility for every word, whether it came from a junior associate, a senior partner, or a machine trained on the internet and insistent on making you happy even if it has to lie to you to do so.
There is a reason the profession imposes that burden. Lawyers occupy a peculiar role in a democratic society. We are the conduit through which private citizens address the power of the state. Centuries ago that meant appealing to a lord who might carry your grievance to the crown. Today it means navigating a body of law so technical that most people can only gesture at it from a distance. That asymmetry of knowledge is precisely why lawyers are held to a higher standard. Accuracy in this profession is foundational. The imperfect need not apply.
Courts have begun enforcing that expectation with increasing impatience. Sanctions for hallucinated citations are no longer isolated, and they have reached well beyond solo practitioners. Opposing counsel are paying attention too. Catching a fabricated case is now a tactical advantage.
None of this means that artificial intelligence has no place in legal practice. Used carefully, it can accelerate work that once consumed endless hours. What it cannot do is shoulder professional responsibility. Verification remains nondelegable. Proofreading, once considered drudgery, has become a form of self-preservation.
That reality explains why tools designed to detect hallucinations are beginning to appear. They exist to catch errors before judges do, to keep confidence from outrunning competence. The goal is not to slow lawyers down, but to keep them upright.
The real lesson of the New York case is straightforward. When machines improvise, humans must verify. The alternative is trusting confident fluency over fact, and that is a gamble the legal profession has never been permitted to take.
AI is not replacing lawyers. It is replacing excuses.
Everything else is malpractice cosplay.
About the Author
Brent C.J. Britton is the Founder and Principal Attorney at Brent Britton Legal PLLC, a law firm built for the speed of innovation. Focused on M&A, intellectual property, and corporate strategy, the firm helps entrepreneurs, investors, and business leaders design smart structures, manage risk, and achieve legendary exits.
A former software engineer and MIT Media Lab alumnus, Brent sees law as “the code base for running a country.” He’s also the co-founder of BrentWorks, Inc., a startup inventing the future of law using AI tools.
Source Acknowledgment
This article discusses issues originally reported by Futurism.
Full credit to their reporting and documentation.

