Achieving AGI Has Lord Of The Rings’ Ring Of Power Dynamic: Sam Altman

AI leaders are coming up with more and more imaginative ways to describe the technology they’re building.

OpenAI CEO Sam Altman has compared the race to build Artificial General Intelligence to one of the most seductive — and corrupting — artifacts in all of fiction: the One Ring from J.R.R. Tolkien’s The Lord of the Rings. Writing in a personal blog post published this week, Altman argued that the real danger isn’t AGI itself, but the totalizing obsession with being the one to control it.

“Once you see AGI you can’t unsee it,” Altman wrote. “It has a real ‘ring of power’ dynamic to it, and makes people do crazy things. I don’t mean that AGI is the ring itself, but instead the totalizing philosophy of ‘being the one to control AGI’.”

The blog post itself was written under dramatic circumstances. Shortly before publishing, someone threw a Molotov cocktail at Altman’s San Francisco home in the early hours of the morning. The device bounced off the house and nobody was hurt, but the incident visibly shook Altman, who said he had underestimated “the power of words and narratives.”


What The Ring Actually Does — And Why The Analogy Works

In Tolkien’s mythology, the One Ring was forged by the Dark Lord Sauron to dominate all other rings of power and, through them, enslave their bearers. Its most insidious quality was not brute force — it was psychological. The Ring promised its wearer power, clarity, and the ability to do immense good. But it was built to serve only its maker’s will. Everyone who possessed it — kings, wizards, hobbits — found themselves slowly consumed by the need to keep it, use it, and ultimately be ruled by it. Even those with the purest intentions, like the wizard Gandalf, refused to take it, knowing the Ring would bend their will toward domination, however noble the starting point.

Altman is drawing an almost identical arc for AGI. The danger, as he sees it, isn’t that the technology is evil — it’s that the idea of controlling it becomes all-consuming. The moment a company or a person glimpses what AGI could mean, rational behavior gives way to an obsessive, zero-sum mentality. Everything becomes subordinate to one goal: being the one in charge when AGI arrives.

This maps neatly onto what has happened in Silicon Valley. Relationships between founders have fractured, companies have split apart over AGI safety disagreements, and bitter rivalries have replaced former friendships. Altman himself called out the “Shakespearean drama” that has defined the industry over the last few years — a remarkable admission from the man at the centre of much of it.


The Industry’s Race To The Ring

Altman has not been shy about signalling how close he believes AGI is. He has previously stated that OpenAI now knows what it needs to do to reach AGI — it’s a matter of execution, not discovery. He has also said that superintelligence could arrive within a few thousand days, a timeline that, if accurate, would make the current scramble for dominance even more consequential.

The urgency is real — and so is the dysfunction it creates. OpenAI’s own history is a case study in the Ring’s corrupting pull. Altman was famously fired by his own board in 2023, then reinstated within days after a staff revolt. The firing was partly rooted in disagreements over safety and governance — in other words, exactly the question of who should control the path to AGI. Ilya Sutskever, a co-founder and the company’s former chief scientist, who reportedly played a role in the ouster, later departed to start his own company. Elon Musk, another co-founder, left years earlier, and he and Altman have since traded increasingly bitter public attacks.

The pattern Altman is describing — where seeing AGI makes people “do crazy things” — has, by his own account, already claimed several of his closest relationships.


No One Should Have The Ring

In The Lord of the Rings, the solution to the Ring is not to wield it wisely — it is to destroy it. No individual or faction is trusted with it. The Ring cannot be used for good, even by the good. Tolkien’s argument is that some concentrations of power are simply incompatible with a free world.

Altman stops short of advocating for AGI’s destruction, but his proposed remedy carries the same spirit. “The only solution I can come up with is to orient towards sharing the technology with people broadly, and for no one to have the ring,” he wrote. He outlined two specific mechanisms: individual empowerment and democratic oversight — ensuring that public institutions, not just private labs, have meaningful control over where this technology goes.

This is a notable position coming from the CEO of the world’s most prominent AGI company. Altman has long framed OpenAI’s mission as ensuring AGI benefits all of humanity — but saying the quiet part out loud, that the pursuit of AGI control is itself a corrupting force, is a more candid admission than most AI executives allow themselves.


The Analogy’s Limits

There is, of course, an irony Altman doesn’t fully address. Tolkien’s Ring had no legitimate uses — it was forged purely for domination. AGI, by contrast, could genuinely cure diseases, accelerate scientific discovery, and expand human capability in ways that are hard to overstate. Altman himself has made these arguments repeatedly. The Ring analogy works for the psychology of the race, not necessarily for the technology itself.

There is also the question of whether Altman’s proposed remedy — broad sharing and democratic control — is compatible with the commercial and competitive reality of how AGI is actually being built. OpenAI has raised tens of billions in capital from investors who expect returns. The democratic idealism in his blog post and the for-profit structure of his company are in tension, and he knows it.

Still, the underlying diagnosis is hard to dismiss. The AI industry has been marked by exactly the kind of drama — defections, feuds, governance crises, and conspicuous power-seeking — that the Ring of Power metaphor captures well. Whether or not any of these companies can pull ahead decisively, the obsession with being the one to do so has already extracted a significant human cost.

Sam Altman’s blog post is many things — a response to a frightening night, a public statement on AI philosophy, a reflection on years of industry turbulence. But at its core, it is one of the most honest things a major AI CEO has said about what this race is actually doing to people: turning rational actors into Gollum, whispering my precious over a technology that hasn’t arrived yet.
