Ugly Bags Of Mostly Water Need Not Apply

AI is dissolving the economic distinction between amateur and professional, and with it the modern idea of expertise itself. But can capitalism survive its own most efficient creation?

Over the course of a recent 48-hour period, something quietly disconcerting happened. Three seasoned software engineers independently told me the same thing: each of them had mastered contemporary AI tools to the point of being able to run a project and produce a complete, functioning work of software authorship entirely on their own. Other people, no longer required. 

“Hell is other people.” – Jean-Paul Sartre, diagnosing a particularly claustrophobic version of humanity

For most of my professional life, I have been surrounded by capable humans. Law partners, associates, paralegals, assistants, and of course IT professionals of various temperaments have formed the familiar scaffolding of my work as a student and a lawyer. But recently, having enjoyed human support and camaraderie at work for decades, I chose to fly solo, not out of any particular reliance on AI tools, but as a comforting “final approach” to my career’s impending soft landing. There is no doubt, though, that with contemporary AI tools augmenting my work, I can research, draft, analyze, iterate, and refine at a pace and scale that once required a small team. The work is not merely faster. It is more complete, more internally consistent, more responsive, and substantially better researched. The fact is, my nonhuman assistants are exceedingly effective, if drastically short on personality and utterly indifferent to professional pedigree.

AI tools are, frankly, unsettlingly effective. Granted, they require my constant oversight and they are not yet capable of replacing me. Not quite yet. Increasingly, my clients now arrive in my inbox with draft contracts already attached, generated not by junior associates or well-meaning business colleagues, but by AI. At present, these documents are often horrid and broken, internally inconsistent, stylistically confused, and occasionally oblivious to the law they purport to invoke. And yet, they are unmistakably improving. Each iteration narrows the gap. What was once the product of decades of professional training is becoming difficult to distinguish, at a glance, from the output of a motivated amateur equipped with the right tools. The line between professional authorship and AI approximation is, frankly, eroding. The distinction between augmentation and usurpation is clearly temporary. See me in five minutes. 

“The most valuable asset of a 21st-century institution will be its knowledge workers and their productivity.” – Peter Drucker, speaking quaintly from the 20th century

So-called knowledge workers are becoming economically redundant, not because they lack skill or loyalty; they are simply humans being human. But AI systems possess something more potent in the aggregate: comprehensive access to recorded knowledge, the ability to synthesize it instantly, and the capacity to apply it to new fact patterns without fatigue, ego, or scheduling conflicts. The machine does not “know” anything in the human sense, but it can analyze, recombine, and propose future directions better, faster, and at a scale no individual human can reasonably match.

Of course, there are dangers. AI is not intelligent, nor is it wise. It hallucinates. It insists with unnerving confidence upon falsehoods. It is unreliable at math, tone-deaf to humor, and sometimes spectacularly illogical. More subtly, I find that it demonstrates no innate sense of salience, often assigning equal weight to what matters and what merely appears to matter. These are not minor defects. In regulated, mission-critical domains like law, medicine, and finance, for example, they matter deeply.

And yet. Despite these flaws, AI tools materially increase productivity. They enable higher volume at lower cost. They inevitably thin the ranks of professions that were once thought indispensable. From a narrow economic perspective, this is a capitalist’s dream. 

From a civilizational one, it is, at best, a conundrum.

Modern capitalism presumes a large, solvent working class. Henry Ford reportedly paid his workers enough to afford the cars they were building, not out of charity, but out of systemic self-interest. Demand requires earners. Consumption requires wages, whether the product is a Model T, or a software app, or a legal contract. What happens when the very efficiencies capitalism prizes eliminate the earners themselves? One can hardly imagine an entire system of production, government, and finance being sustained solely by those fortunate enough to participate in the accelerating concentration of AI-driven wealth currently metastasizing on the world stage.

“The machine should serve the people, not the people the machine.” – 1812 letter from a Luddite, less an idealist than just a guy out of a job

Some argue that AI cannot replace human intuition, instinct, or experience. This is comforting rhetoric, but it is an anodyne masquerading as a cure. These “human” qualities are emergent computations in biological hardware; each is just a matrix of electro-chemical potentials running around in carbon-based human brains. There is no principled reason AI cannot eventually replicate them, or at least approximate them closely enough to render the distinction economically irrelevant.

So what then becomes of us? What are we to do, we “ugly bags of mostly water,” as a silicon-based lifeform memorably described humans in a 1988 episode of Star Trek: The Next Generation? How are we to withstand the AI onslaught?

One possibility is a dramatic reallocation of human effort toward roles that are relational rather than instrumental: caregiving, counseling, education, mediation, and the arts come to mind, though not, it would seem, the arts of video and media production, which offer clear early warning signs. Text-to-video systems such as Sora are rapidly “democratizing” the field, largely by erasing the economic and qualitative distinction between amateur and professional. When passable output becomes cheap, abundant, and instant, professional excellence no longer commands a premium. Entire categories of creative labor are not being replaced so much as devalued into irrelevance. So perhaps the arts won’t serve as a reliable backstop after all.

Another possibility, and perhaps the most immediate, elevates humans into supervisory roles: what we might call AI compliance officers, prompt architects, and output validators. In law, this already describes my relationship with AI tools. I curate queries, verify results, and catch the confident hallucinations. Likewise, a senior engineer reviews AI-generated code, a doctor validates diagnostic recommendations.

But this is perforce a mathematical trap. If one human can now supervise the work of ten, we’ve still eliminated nine positions. Oversight roles are real and necessary, but they’re a bridge, not a destination. And once the AI learns the oversight algorithm, even the human supervisor becomes superfluous. Who watches the watchers?

A third path involves the expansion of leisure, creativity, and civic participation, financed by mechanisms that decouple survival from employment. Such solutions demand a radical realignment of human worth relative to market value. This requires not just policy innovation but a wholesale reimagining of what it means to contribute to society. The challenge is less technical than political, which is to say it is ultimately one of convincing those who control AI-generated wealth to share it with those it has displaced. Most capitalists bridle at the prospect of redistributing productivity gains that current systems are designed to concentrate. Their resistance is existential.

We are approaching a moment when productivity is no longer the problem, but justification is.

“The ultimate, hidden truth of the world is that it is something we make, and could just as easily make differently.” – David Graeber, antiestablishmentarian

There is no doubt that a fully deployed AI future entails significant structural change. If business processes are universally automated but no one earns enough to purchase the output, the system collapses into absurdity. If AI is integrated into modern life to its logical extreme, late-stage capitalism risks becoming a system so efficient that it consumes the very conditions that make it viable. A snake eating its own tail, if you will. Hoist with its own petard.

Is regulation the answer? Perhaps. Affirmative-action-style human-to-machine employment ratios or algorithmic taxation may sound inelegant, but they may prove no more artificial than the legal fictions we already accept. Others propose universal basic income, negative income taxes, or sovereign dividends derived from AI-driven productivity itself. Additional, perhaps more “revolutionary” ideas are undoubtedly forthcoming.

None of these solutions is clean. All require political will, cultural adaptation, and a willingness to admit that the old equations no longer balance. There are, of course, many deeply dystopian possibilities as well. 

Law, software, and media are merely early examples of the same phenomenon. When tools erase the distinction between amateur and professional, markets follow status downward, not quality upward.

What is clear for now is this: we are automating not just tasks, but the economic rationale for human labor. 

The question is no longer whether ugly bags of mostly water need apply. It is what kind of civilization emerges once they no longer do, or can.

About the Author

Brent C. J. Britton is the Founder and Principal Attorney at Brent Britton Legal PLLC, a law firm built for the speed of innovation. Focused on M&A, intellectual property, and corporate strategy, the firm helps entrepreneurs, investors, and business leaders design smart structures, manage risk, and achieve legendary exits.

A former software engineer and MIT Media Lab alumnus, Brent sees law as “the code base for running a country.” He’s also the co-founder of BrentWorks, Inc., a startup inventing the future of law using AI tools.
