Anthropic is already one of the biggest success stories of the AI revolution, and it may not stop there.
David Sacks — venture capitalist and the US government’s AI and crypto czar — made a striking claim on the All-In podcast: that Anthropic is on track to become not just the most powerful company in tech, but the most powerful monopoly ever created in human history.
“Unless something about their current trajectory changes, Anthropic will be the most powerful monopoly ever created in human history,” Sacks said.

His argument starts with a pattern that anyone who has watched Silicon Valley long enough will recognise. “We know that tech markets have a history of consolidating down and turning into either monopolies or duopolies. And if you just look at the revenue right now, there are only two companies making substantial revenue on AI. It’s Anthropic and OpenAI.”
The numbers back him up. Anthropic has gone from roughly $1 billion in annualised revenue at the end of 2024 to over $30 billion by April 2026. OpenAI is the only other company at anywhere near that scale.
Sacks takes the trajectory and runs it forward: “Anthropic is growing at an exponential rate — 10X a year — and if they just do that for 18 more months, they’ll be by far the most valuable company in human history, and they’ll have unprecedented control over the most important technology of our time. So I don’t know what you call that, but it is something to think about.”
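The compounding implied by that claim is easy to check: 10X a year sustained for 18 more months works out to 10^(18/12) = 10^1.5, roughly a 32X multiple, which would take $30 billion in annualised revenue to somewhere near $950 billion. Whether any company can hold a 10X growth rate at that scale is, of course, the entire question.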
Then Sacks pivoted to a thought experiment — one that cuts to the heart of a debate that has followed Anthropic almost since its founding.
“I just want you to think for a second about the case of John D. Rockefeller, who I think is known as probably the most successful, most ruthless monopolist in American history. But he wasn’t very good at PR. He was terrible at PR. Everyone sort of recognised how ruthless he was. You see movies like There Will Be Blood, which is basically about him.”
“Now imagine if John D. Rockefeller was way better at public relations, and instead of calling his company Standard Oil, he called it Safe Oil — because as we know, kerosene is dangerous. Their first big product was kerosene, and kerosene can light your house, or it can burn it down, and in the wrong hands it can torch a city, or you can use it to make a bomb.”
“So John D., let’s say, should have called for the creation of a new government agency to regulate the safety of his product, and they could have done rigorous testing, licensing, common-sense regulation. There would’ve been a very intense debate over safety standards. What should the proper wick thickness be? And should we allow all those dangerous independent refiners? And I think people would have gotten so wrapped up in this debate over what constituted safe oil or safe kerosene that they would have missed what was really going on — which is that Rockefeller was building the richest, most powerful monopoly of all time.”
“In fact, people might even have called Rockefeller an effective altruist — because of course he was so concerned about the safety of his product,” Sacks said, to laughter from the rest of the panel as it became clear what he was getting at.
The Rockefeller analogy isn’t subtle, and it isn’t meant to be. Sacks has levelled versions of this critique at Anthropic before, accusing the company of running “a sophisticated regulatory capture strategy based on fear-mongering” and timing alarming safety studies to coincide with major model releases. Meta’s chief AI scientist Yann LeCun has made a similar accusation, arguing that Anthropic uses safety disclosures to get AI regulated in ways that would freeze out open-source competitors.
Anthropic, for its part, has been consistent in pushing back. CEO Dario Amodei has argued that the company bears real costs for its safety commitments — including hundreds of millions in forgone revenue — and that the real analogue to the cigarette industry’s suppression of cancer data would be staying silent about risks, not calling them out. The company even picked a very public fight with the US government earlier this year, refusing to remove safeguards that would have allowed its models to be used for mass domestic surveillance and fully autonomous weapons, a standoff that ended with Washington designating Anthropic a “supply chain risk”. That, the company argues, is not the behaviour of a business primarily focused on regulatory capture; it is, at minimum, one willing to sacrifice significant government revenue for a principle.
But Sacks’ core point doesn’t entirely depend on bad faith. Even if Anthropic’s safety concerns are genuine, the structural outcome he describes — a single company with unprecedented control over the most consequential technology in human history — is worth taking seriously on its own terms. The revenue numbers are real. The concentration at the top of the AI market is real. And if the 10X growth continues, the question of who controls the technology — and what rules govern it — will become one of the defining political questions of the decade.