AI Hallucinations

By Brent C. J. Britton

Earlier this month, a California appeals court slapped a lawyer with a $10,000 fine for filing a brief full of hallucinated case law, citations straight out of ChatGPT’s fever dream. The court published its opinion as a warning shot to every lawyer in the state: if you use AI and it lies, the consequences are still yours.

Honestly, it’s hard to argue with that. But it’s also hard not to laugh, and then sigh, at the sheer creativity of the excuses rolling out of courtrooms nationwide.

404 Media recently did a forensic dive into hundreds of cases where lawyers got caught using generative AI to write briefs, invent citations, or fabricate “authority.” The stories read like a tragicomic anthology: malware, migraines, malfunctioning assistants, and an outbreak of “the AI did it!”

It’s like the dog ate my homework… rewritten for the 2020s.

The Great Hallucination Epidemic

Let’s be clear: AI’s hallucinations are basically improvisation gone wild. 

That’s what it’s designed to do. Large language models predict patterns, not truth. When you ask a machine to “sound like a lawyer,” it will happily do so, whether or not the case it cites exists outside the LLM’s imagination.

As a former software engineer turned lawyer, I’ll tell you: code is just instructions. It has no ethics until you give it some. Law, in a way, is the same thing: code for running a country. And contracts? Code for running business deals. All these systems depend on humans to debug them.

So when a lawyer lets AI draft their filings without verification, that’s not an “AI problem.” That’s a human laziness problem.

The Blame Parade

In the cases 404 Media reviewed, the creativity was… impressive.

  • A New York lawyer blamed vertigo, malware, and temperature instability.
  • A Florida lawyer said it was a paralegal’s fault, and also, it was pro bono.
  • A Louisiana attorney blamed Westlaw Precision.
  • Another said the “AI citations” were just part of a legal experiment (you know, for science).
  • My personal favorite? One attorney blamed a computer theme change.

Look, I get it. Legal work is brutal. Billables are tight. Clients want everything faster. Generative AI looks like salvation.

But when your filings start quoting imaginary Supreme Court cases? That’s not “innovation.” That’s malpractice cosplay.

The Real Lesson

The takeaway isn’t don’t use AI. It’s use it like a professional tool, not a magic wand.

AI can supercharge legal research, summarize discovery, or draft first-pass briefs and letters, if you keep a lawyer in the loop. The danger comes when attorneys forget that if ChatGPT can’t find the right case in reality, it hallucinates one like it’s on a bad trip.

Here’s the truth bomb: “AI doesn’t make bad lawyers. It exposes lazy ones.”

The best lawyers I know are using AI every day, responsibly. They verify citations. They label AI-generated content in their filings. They build workflows where human oversight is baked in.

This isn’t about banning AI; it’s about raising the bar (pun fully and firmly intended).

What Law Firms Should Do Right Now

If you’re running a firm, big or boutique, this is your moment to paper that puppy up.

  1. Write an AI-use policy. Treat it like a compliance framework.
  2. Require citation verification. “Trust, but verify.” Reagan popularized this old Russian proverb for the nuclear age, and it applies to robots too.
  3. Train your staff. Ignorance isn’t an excuse. “I didn’t know AI makes stuff up” won’t fly in any court. It never did.
  4. Use domain-specific tools. Westlaw, Lexis, Harvey, anything that actually cites real cases.
  5. Add human checkpoints. Before anything gets filed, a lawyer must touch the code.
  6. Check out BrentWorks. Our forthcoming product will root out hallucinations in your (and opposing counsel’s) documents.

The Future (and a Little Hope)

There’s an old joke about the law firm of the future being populated by a lawyer, an AI, and a German Shepherd. The AI is there to practice law, and the German Shepherd is there to bite the lawyer if she tries to touch the AI. (Pause for laughter.)

We’re at the weird frontier where technology can write briefs, generate contracts, and simulate empathy. Unprecedented and miraculous, surely. And maybe frightening? But that’s not the end of law; it’s the next chapter.

In the future, lawyers who understand both ethics and algorithms will lead. Those who don’t will keep blaming malware, vertigo, and their assistants.

At Brent Britton Legal, we teach founders and professionals that your product isn’t the only thing that needs debugging; your processes do too.

AI is not the enemy. Misuse is.

And just like in startups, legal innovation thrives when you test, fail, learn, and try again, preferably without lying to the judge.

The Verdict

AI isn’t here to replace lawyers; it’s here to replace excuses.

And that’s a future I can absolutely live with.

About the Author

Brent C. J. Britton is the Founder and Principal Attorney at Brent Britton Legal PLLC, a law firm built for the speed of innovation. Focused on M&A, intellectual property, and corporate strategy, the firm helps entrepreneurs, investors, and business leaders design smart structures, manage risk, and achieve legendary exits.

A former software engineer and MIT Media Lab alumnus, Brent sees law as “the code base for running a country.” He’s also the co-founder of BrentWorks, Inc., a startup inventing the future of law using AI tools.

Source Acknowledgment
This article discusses issues originally reported by Jason Koebler for 404 Media.

Full credit to 404 Media for their original reporting and case documentation.
