AI, Federalism, and the Speed of Change

By Brent C. J. Britton

Shortly after I published my last article on artificial intelligence regulation and infrastructure, the legal ground shifted again.

On December 11, 2025, the U.S. President issued an executive order asserting federal authority over the regulation of artificial intelligence. The move has been widely interpreted as an effort to unsettle emerging state initiatives, especially California’s recently enacted companion-chatbot law discussed in my prior article, Asimov’s Three Laws of Robotics… It is Time. This essay does not opine on the substantive provisions of that order or its policy objectives. My purpose today is to examine the legal uncertainty and practical instability the order introduces. (Whether the President’s assertion of federal supremacy can withstand judicial scrutiny, particularly in the absence of a comprehensive federal framework, remains an open question and one likely to be addressed by the courts.)

For AI operators and their lawyers, the uncertainty defines the environment in which decisions must be made.

Artificial intelligence has crossed a meaningful threshold. It has moved beyond experimental deployment and become a form of infrastructure. It is embedded in daily workflows and systems that influence consequential decisions. When technology reaches this level of integration, regulation follows. The process, however, rarely unfolds in a clean or linear fashion.

Instead, governance of any new technology develops unevenly. States experiment. Federal authority reasserts itself. Courts intervene later, often applying doctrines shaped in earlier eras to technologies that did not exist when those doctrines were formed. Throughout this process, technological development continues uninterrupted. The result is an extended period of instability in which rules shift and are reinterpreted.

That instability itself constitutes a form of risk.

In practice, my work as a lawyer often involves surveying a legal landscape and identifying the path of minimal risk through it. This task becomes significantly more difficult when the landscape is in motion, when statutes and executive orders evolve more rapidly than institutions can adapt. Planning under such conditions requires accepting that the terrain itself may change mid-course.

Similar acceleration is visible beyond the regulatory domain. During a recent 48-hour period, several of my friends who are experienced software engineers spontaneously announced to me that they have begun operating without staff or any other human programmers by their side. Modern AI tools now allow individual practitioners to design, build, test, and deploy complex systems while flying completely solo. This capacity reflects a durable reduction in the coordination costs of developing launch-ready software products.

Artificial intelligence is reshaping labor structures across multiple professions, including software, law, design, and analysis. The emergence of solo professionals wielding enterprise-level capabilities signals a broader revision of assumptions concerning staffing, organizational form, and scale. To say this will have repercussions for capitalism, nay, civilization itself, is an epic understatement. 

In any event, regulatory systems respond to these shifts only after they occur. Most laws are bandages, not preventative medicine.   

Railroads, electricity, aviation, and the internet each outpaced governance during their formative periods. Standards eventually emerged. Oversight stabilized. Risk became more legible. The distinguishing feature of the current moment is its velocity. The state of the art in AI advances every week. Artificial intelligence compresses development timelines and magnifies leverage at a rate unmatched by prior general-purpose technologies. Railroads required decades to transform society. Conversational AI required a few months.

The December 11 executive order did not settle the question of how artificial intelligence should be governed. Instead, it highlighted the extent to which governance remains unresolved. Legal institutions are attempting to adapt to a system that is already reshaping the context in which legal rules are created and enforced.

Designing for uncertainty has become a practical necessity.  Regulatory frameworks are shifting in real time, and assumptions about durability require reassessment. Nothing is sacred. 

In summary, artificial intelligence continues to advance headlong into tomorrow without waiting for doctrinal clarity today. The law is evolving under pressure, mostly reactively, and without the benefit of settled consensus.

Recognizing that reality, and designing within it, now forms part of ordinary professional judgment.

About the Author

Brent C.J. Britton is the Founder and Principal Attorney at Brent Britton Legal PLLC, a law firm built for the speed of innovation. Focused on M&A, intellectual property, and corporate strategy, the firm helps entrepreneurs, investors, and business leaders design smart structures, manage risk, and achieve legendary exits.

A former software engineer and MIT Media Lab alumnus, Brent sees law as “the code base for running a country.” He’s also the co-founder of BrentWorks, Inc., a startup inventing the future of law using AI tools.
