By Some Definitions, We’ve Already Achieved AGI: Anthropic Co-founder Daniela Amodei

Not long ago, tech experts were still predicting when we would reach AGI; some now believe that, at least in some respects, it has already been reached.

In a recent discussion, Daniela Amodei, co-founder and president of Anthropic, challenged the conventional understanding of artificial general intelligence (AGI), suggesting that the milestone many in the industry have been working toward may already have been reached in certain domains. Her remarks point to a shift in how we should think about AI capability benchmarks, particularly as systems like Claude demonstrate proficiency that rivals or exceeds that of human experts on specific tasks.

“AGI is such a funny term because many years ago it was kind of a useful concept to say when will artificial intelligence be as capable as a human,” Amodei explained. “And what’s interesting is by some definitions of that, we’ve already surpassed that.”

She pointed to coding ability as a concrete example. “Claude can definitely write code better than me. It’s a low bar,” she laughed. “But Claude can also write code about as well as many developers at Anthropic now, or it can write a percentage of code as well as developers at Anthropic. That’s crazy. We probably employ some of the best engineers and developers in the world, and many of them are saying, wow, Claude is capable of doing a lot of work that I can do or extremely accelerating the work that I can do.”

Yet Amodei was quick to acknowledge the limitations that persist. “On the other hand, Claude still can’t do a lot of things that humans can do. And so I think maybe the construct itself is now wrong or maybe not wrong, but just outdated.”

Looking ahead, Amodei addressed the question of whether AI will continue advancing to even more transformative levels. “Will we get to just higher level, more powerful, transformative artificial intelligence without other breakthroughs? And I think the truth is we don’t know. The path to developing the technology is predicated on a lot of complicated mixture of science and engineering. And part of what I think is so special about these lab environments is that it’s just different approaches to kind of getting to that target.”

Despite the uncertainty, her outlook remained cautiously optimistic. “The progress doesn’t seem to be slowing down. Again, nothing slows down until it does. So it’s very possible that could happen. And I think if I were betting, I would say probably things are going to at least continue to get somewhat more capable over time. And we should be prepared for a world where that’s true.”

Amodei’s comments arrive at a pivotal moment in AI development. Recent releases from major AI labs have demonstrated increasingly sophisticated capabilities, with systems now handling complex coding tasks, scientific reasoning, and creative work at levels that would have seemed impossible just a few years ago. OpenAI’s GPT-5.2 model has shown remarkable improvements on mathematical and coding benchmarks, while Google’s Gemini and Anthropic’s own Claude models continue to push boundaries in reasoning and task completion.

The implications of Amodei’s perspective extend beyond semantic debates about terminology. If some definitions of AGI have already been met in narrow domains, the focus shifts from “if” to “what next,” raising urgent questions about deployment, safety, and societal preparation. Companies like Anthropic have emphasized responsible scaling policies and safety research, but the rapid pace of capability gains suggests that businesses, policymakers, and society at large may need to accelerate their own preparations for an AI-capable future that, by some measures, has already arrived.