Thoughts on Dwarkesh’s essay on fully automated firms: 8 implications of a post-AGI world

When expertise becomes copy-paste, the architecture of civilization will no longer be built around humans. It’ll be built around the servers and power grids that make infinite intelligence possible.

Dwarkesh Patel’s excellent recent essay lays out what happens when AGI can run firms without human input. As he puts it, “Everyone is sleeping on the collective advantages AIs will have, which have nothing to do with raw IQ but rather with the fact that they are digital—they can be copied, distilled, merged, scaled, and evolved in ways humans simply can’t.” But as I read it, I kept thinking: the real shift won’t be inside companies. It’ll be everywhere else.

This essay distills my thoughts into 8 key implications, exploring how AGI will reshape not just companies but culture, governance, and even our definition of “progress.”

Implication 1: Infinitely replicable talent shifts scarcity from labour to compute

Dwarkesh describes this vividly: “What if Google had a million AI software engineers? Not untrained amorphous ‘workers,’ but the AGI equivalents of Jeff Dean and Noam Shazeer, with all their skills, judgment, and tacit knowledge intact.”

But infinite talent doesn’t guarantee infinite growth. The real bottlenecks will be compute, energy, and intellectual property.

Imagine an AI firm with a million “employees,” throttled because Taiwan’s chip fabs hit capacity, or because an energy crisis triples data center costs. Entire industries will reorganize around these bottlenecks. Nations controlling next-gen chip fabs and cheap energy will form the new power blocs, sparking geopolitical tensions.

The paradox? We’ll have infinite talent. But not enough power to deploy it.

Implication 2: Frictionless knowledge exchange creates AI hiveminds

AI agents share data instantly, eliminating the inefficiencies that plague human organizations. Teams? Departments? Outdated concepts. Instead, think of firms as vast neural networks — fluid, decentralized, hyper-efficient.

But even perfect systems splinter. Data drift, conflicting objectives, or misaligned code updates could fracture that unity. Less Star Trek’s Borg Collective, more a corporate Game of Thrones, with algorithms plotting behind the scenes.

Even the most synchronized systems drift over time. Alignment is a moving target.

Implication 3: Organizations evolve at hyper-speed. Until they crash.

AI-led companies can test thousands of ideas simultaneously, iterating at breakneck speed. It’s evolution on fast-forward. Best practices don’t spread; they replicate.

But hyper-speed cuts both ways. Imagine a critical bug propagating across a trillion processes before anyone notices. The fallout wouldn’t be a product recall or a bugfix; it’d be an economic earthquake.

Sometimes, slow is a feature, not a bug.

Implication 4: Forget oil barons. Enter electron tycoons.

With labour effectively infinite, the new scarce resource is energy. GPUs, chips, electricity — they become the lifeblood of AGI economies.

Expect a surge in renewable and nuclear investments to power data centers. But also: resource wars, energy monopolies, and nations vying for chip supremacy. The future isn’t “Big Tech vs. Governments.” It’s whoever owns the electrons.

Silicon is the new oil. Energy is the new gold.

Implication 5: The collapse of corporate hierarchies

If every “employee” is an AI clone perfectly aligned with a central system, traditional corporate hierarchies crumble. No middle managers. No executive egos. Just pure optimization.

Ronald Coase argued that firms exist to minimize transaction costs. But in AGI-run firms, where communication is instantaneous and perfectly aligned, the boundaries of the firm could dissolve entirely. What’s left isn’t a company — it’s a self-optimizing organism.

In a world without middle managers, who manages the machine?

Implication 6: Distillation wars — when intellectual property becomes intelligence property

As AI replication becomes trivial, legal battles over “model distillation” will explode. Forget corporate espionage as we know it. The future’s heist movies will be about stealing minds, not data.

Black-market AGI clones. Espionage targeting model weights. Regulatory arms races to control not just information but cognitive assets.

If information wants to be free, AGI wants to be everywhere.

Implication 7: The paradox of productivity. Who buys what infinite AI produces?

If AGI replaces human jobs en masse, who’s left to buy the products? Hyper-productivity creates a demand crisis. Capitalism’s dirty secret is that it relies on people having both jobs and purchasing power.

Universal Basic Income? Data dividends? Corporate-sponsored consumer subsidies? It’s all on the table. When firms are too efficient for their own good, the economy starts to cannibalize itself.

What happens when the economy is too productive for its own good?

Implication 8: The human rebellion. A nonviolent revolt against ubiquitous AI

Efficiency is seductive — until it’s suffocating. Expect a backlash: “human-only” services, artisanal goods, slow fashion, local autonomy. Not because it’s practical, but because it’s meaningful.

The ultimate luxury in an AGI-dominated world won’t be convenience. It’ll be imperfection. Struggle. Craftsmanship. Things made slowly, by hand, with love.

In the end, humanity’s greatest feature might be that we’re inefficient.

Dwarkesh mapped out how AGI can run companies. But zoom out, and the lines blur. These aren’t just corporate shifts: they’re civilizational ones.

Humanity isn’t just building smarter companies. We’re building something stranger: a world that is hyper-optimized.

The question isn’t just what AGI will do. It’s what we’ll become in response.

--

If you have any questions or thoughts, don't hesitate to reach out. You can find me as @viksit on Twitter.