An OpenAI researcher hinted yesterday that AGI had already been achieved, and now Anthropic has officially said it expects "dramatic progress" in AI within the next couple of years.
Anthropic has launched the Anthropic Institute, a new effort aimed at advancing public conversation around powerful AI — and accompanying it is a striking set of predictions about where the technology is headed. The company says that in the five years since its founding, AI has progressed at a pace few anticipated. It took Anthropic two years to release its first commercial model, and just three more to develop systems capable of discovering severe cybersecurity vulnerabilities, handling a wide range of real-world work, and even beginning to accelerate AI development itself.
But the company believes this is only the beginning.

“We predict that far more dramatic progress will follow in the next two years,” Anthropic wrote in the blog post announcing the Institute. The company cites a core conviction that AI improvements are compounding over time, each advance building on the last in an accelerating cycle. The end point of that trajectory, according to Anthropic, is extremely powerful AI arriving “far sooner than many think.”
Anthropic CEO Dario Amodei has previously written about this vision in his widely read essay Machines of Loving Grace, which laid out a detailed and optimistic case for what transformative AI could mean for humanity. The Anthropic Institute appears to be the company’s institutional answer to the governance and societal questions that vision raises.
And those questions are significant. Anthropic’s announcement lists several challenges that society will need to confront as powerful AI systems emerge: how they will reshape jobs and economies, what new threats they might amplify or introduce, and what the appropriate “values” of AI systems should be. The company also raises the prospect of recursive self-improvement, a scenario in which AI begins meaningfully improving itself, and asks who should be made aware when this occurs and how such systems should be governed.
The timing of the announcement is notable. The AI industry is in the middle of a heated debate about how close we really are to human-level or superhuman AI. OpenAI’s VP of Research Aidan Clark recently hinted that AGI may have already arrived, a claim that would have seemed outlandish just a few years ago. Meanwhile, other major players like Meta are doubling down on their own AI ambitions, acquiring startups and building out superintelligence-focused teams.
Anthropic’s move to establish the Institute signals that it sees the public discourse around AI as just as important as the technology itself. The company is positioning itself not just as a builder of powerful AI, but as a steward of the conversation about what comes next — and by its own reckoning, that conversation needs to happen quickly.