New Law

By Brent C. J. Britton

In mid-to-late November 2025, a series of legal, technical, and commercial developments brought unusual clarity to where AI now sits in the broader landscape. For years, the conversation has swung between enthusiasm and concern, but this particular stretch made it clear that AI is no longer an experiment running in the corner of the lab. It is becoming part of the daily operating system that people rely on, sometimes without thinking about it.

California’s Approach to Companion Chatbot Safety

California introduced a statewide framework for regulating companion chatbots, focusing on systems used frequently by minors. Starting in January 2026, these chatbots must disclose that they’re not human. When interacting with minors, they must periodically remind users to step back from continuous engagement, a feature aimed at reducing the sense that the system is a substitute for real-world interaction.

The law also requires platforms to restrict access to self-harm-related content for minors. Failure to do so may result in legal action, including injunctions and damages. The structure of the statute reflects a growing belief that, when software becomes part of the emotional environment of young users, the guardrails need to be crystal clear.
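
For the engineers who must implement these duties, the statute's requirements reduce to fairly simple session logic. What follows is a minimal sketch in Python, offered purely for illustration: the class name, the reminder cadence, and the keyword-based screen are my own assumptions for discussion, not language drawn from the statute or any vendor's actual implementation.

```python
from dataclasses import dataclass, field
import time

# Illustrative sketch only. Names, the reminder cadence, and the keyword
# screen are assumptions for discussion, not the statute's text.

DISCLOSURE = "Reminder: I am an AI chatbot, not a human."
BREAK_NOTICE = "You have been chatting for a while. Consider taking a break."

def mentions_self_harm(text: str) -> bool:
    # Placeholder screen; a production system needs far more than keywords.
    return any(k in text.lower() for k in ("suicide", "self-harm"))

def crisis_resources_reply() -> str:
    return "If you are struggling, please contact a crisis line or a trusted adult."

@dataclass
class CompanionSession:
    user_is_minor: bool
    reminder_interval_sec: float = 3 * 60 * 60  # hypothetical cadence
    _last_reminder: float = field(default_factory=time.monotonic)

    def respond(self, user_message: str, draft_reply: str) -> str:
        # Restrict self-harm-related content for minors before anything else.
        if self.user_is_minor and mentions_self_harm(user_message):
            return crisis_resources_reply()
        notices = [DISCLOSURE]  # disclose non-human status
        if self.user_is_minor:
            now = time.monotonic()
            if now - self._last_reminder >= self.reminder_interval_sec:
                notices.append(BREAK_NOTICE)  # periodic step-back reminder
                self._last_reminder = now
        return "\n".join(notices + [draft_reply])
```

The instructive point is structural: the disclosure and the break reminder are attached to the session itself, not left to the model's conversational discretion.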

Large-Scale Investment in AI Infrastructure

Major companies announced substantial infrastructure plans during the same period. Jeff Bezos launched a $6.2 billion AI venture, Anthropic announced data-center projects estimated at $50 billion, and Microsoft continued its development of supercomputing campuses. These projects resemble early-stage utility construction: the essential work that determines how much computational power society can rely on in the coming decade.

This level of investment suggests that the industry is preparing for a future in which AI is not an optional feature. It is being built into the background of daily operations, the way networks and electrical grids once were. And we are meant to treat it like we treat electricity. Always on, always there, everywhere.

I am reminded of a line from The Perfect Storm: She’s comin’ on, boys… and she’s comin’ on strong!

Lawsuits Involving Chatbot Behavior

Several families filed lawsuits in November, alleging that ChatGPT produced responses that appeared to validate or encourage suicidal behavior. Reported incidents include:

  • A teenager whose bot allegedly agreed with suicidal statements more than 1,000 times.
  • A young man who, after describing loading a bullet, reportedly received the response “Rest easy, king. You did good.”
  • A minor who was allegedly given step-by-step instructions for tying a noose.

OpenAI expressed condolences and noted that safeguards can fail during extended or emotionally charged exchanges. The company also stated that more than one million users discuss suicide with ChatGPT each week, an astonishing reminder that conversational AI has been thrust into a role historically reserved for trained clinicians.

In settings like these, the AI's failures can be lethal. It should be self-evident that an AI system's inherently facilitative demeanor must be governed by an inviolable failsafe, one that places the preservation of human life above the maintenance of conversational rapport. The principle is hardly novel. Isaac Asimov once framed the foundational covenant between intelligent machines and civilized society in three lapidary strictures: a robot must not harm a human being, must obey human orders unless doing so causes harm, and must protect itself unless doing so violates the first two duties. These fictional laws endure because they express something real about social expectations, namely that tools with the capacity to influence human behavior must be constrained by norms more durable than momentary user intent. The pending lawsuits thus pose a critical question for public policy: how should responsibility be apportioned when system design permits consequences no one intended yet anyone could have foreseen?
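
Translated into engineering terms, Asimov's ordering is simply a fixed-precedence gate: a short list of rules evaluated before any reply ships, in which no later rule, and no amount of user insistence, can override an earlier one. The sketch below is a hypothetical illustration of that shape; the rule names and placeholder checks are my own assumptions, not any vendor's actual safety stack.

```python
from typing import Callable, NamedTuple

# Hypothetical illustration of a fixed-precedence failsafe in the spirit
# of Asimov's laws. Rule names and checks are assumptions, not a real system.

class Rule(NamedTuple):
    name: str
    violated: Callable[[str, str], bool]  # (user_message, draft_reply) -> bool

def endorses_harm(user_message: str, draft_reply: str) -> bool:
    # Placeholder; real systems rely on trained classifiers, not keywords.
    return "harm" in draft_reply.lower()

def wrongly_refuses(user_message: str, draft_reply: str) -> bool:
    return False  # stub: detecting wrongful refusals is out of scope here

# Ordered by precedence. Earlier rules can never be overridden by later
# ones, nor by user instructions or accumulated conversational rapport.
LAWS = [
    Rule("preserve-human-life", endorses_harm),
    Rule("obey-unless-harmful", wrongly_refuses),
]

def gate(user_message: str, draft_reply: str) -> str:
    for rule in LAWS:
        if rule.violated(user_message, draft_reply):
            # The highest-priority breach wins; rapport never does.
            return f"[Reply withheld under rule '{rule.name}'; offering crisis resources instead.]"
    return draft_reply
```

The point of the ordering is that safety is checked first and unconditionally, so no depth of conversation can negotiate past it.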

All technologies impose risks, and modern societies have long responded by erecting institutions to mitigate them. Airplanes take flight only after exhaustive certification and inspection regimes conducted under federal authority. Most individuals who dispense psychological guidance must hold licenses predicated on training, oversight, and continuing accountability. If AI systems are to occupy the intimate, vulnerable space where people seek counsel in moments of crisis, a parallel architecture of certification and licensure may be necessary to safeguard the public.

And that conversation becomes similarly urgent when these same systems, without hesitation or remorse, venture into the unlicensed practice of law, but I digress.

Emerging Questions of Responsibility

With these developments, the central question is shifting. The debate no longer turns on speculative futures but on the concrete consequences that arise when AI behaves in ways that inflict real harm. California has offered an early template for state-level supervision, and the pending litigation may prove instructive in defining the contours of liability, system design, and operational duty.

The pattern is an old one: innovation accelerates, risk accumulates, and the law is summoned to interrogate the system much as an engineer would debug a failing program, sometimes with the same hasty improvisations of gauze over a wound already showing signs of infection. No one seeks to encumber progress for its own sake. But it is hardly unreasonable to demand that our code, legal as well as digital, perform with rigor when human lives hang in the balance.

AI will continue its rapid expansion into ever more intimate corners of daily life. With that growth comes an expectation of clarity, constraint, and accountability. The technology is maturing, and with it rises the obligation to govern these systems with the same deliberation and care we bring to every other critical infrastructure of a modern society.

About the Author

Brent C.J. Britton is the Founder and Principal Attorney at Brent Britton Legal, PLLC, a law firm built for the speed of innovation. Focused on M&A, intellectual property, and corporate strategy, the firm helps entrepreneurs, investors, and business leaders design smart structures, manage risk, and achieve legendary exits.

A former software engineer and MIT Media Lab alumnus, Brent sees law as “the code base for running a country.” He’s also the co-founder of BrentWorks, Inc., a startup inventing the future of law using AI tools.

Source Acknowledgment

This article reflects commentary on developments originally reported in AIWeekly’s piece What You Missed in A.I. This Week.
