There are concerns in some corners of the industry that AI progress is slowing down, but Anthropic argues that this isn't the case.
Amid a growing debate in the tech world about a potential plateau in artificial intelligence advancement, an executive at one of the leading AI labs offers a firm counter-narrative. Michael Gerstenhaber, who works on product at Anthropic, argues that progress is not merely continuing but accelerating. In a recent discussion, he provided an inside look at the rapidly evolving capabilities of AI models, using the development of Anthropic's own Claude model as a prime example.

Gerstenhaber, who has been at the forefront of Anthropic's product development, paints a vivid picture of exponential growth. He argues that the timeframe for significant leaps in AI capabilities is shrinking dramatically. "To me, the amount of change that I've seen just in the year and a half that I've been working on this product, since Claude 3 came out, has been dramatic," he stated.
He then lays out a timeline that underscores this acceleration: "Between Claude 3 and 3.5 version one in six months, to 3.5 version two in six months, to 3.7 in six months, there was a little bit of a linear relationship. Then into 4 in two months. And I don't know what the future looks like, but I expect it to be faster and faster and faster as we get better and better and better at the research. So the frontier is gonna look very different."
To illustrate this rapid evolution in practical terms, Gerstenhaber turns to the domain of coding, a key use case for today's AI models. "We've been doing coding this entire time, but last June, coding looked like hitting tab and getting the rest of a line. And then by August it was asking and iterating with the intelligence to write whole functions. And today, it's assigning it a Jira ticket for seven hours." This progression from simple code completion to complex task management in a matter of months highlights the tangible impact of the accelerating progress he describes.
He concludes with a powerful assertion about the future of the AI landscape: “So even sort of talking about coding as a commoditized use case or the intelligence as a commoditized use case, belies the fact that it’s changing dramatically and what you can do in terms of economic value with the model accelerates with time. If anything, we’re only getting into a more volatile space where the change is happening more and more rapidly, to be honest. And there’s less and less commoditization.”
The rapid, and accelerating, pace of AI development he describes suggests that the tools and capabilities available today are just a glimpse of what's to come in the very near future. This trend is further evidenced by the recent flurry of releases from major AI labs. For instance, the time between OpenAI's GPT-3 and GPT-4 was roughly two and a half years, while the gap between different iterations of their more recent models has been considerably shorter. Similarly, Google has been releasing updates to its Gemini models at a swift pace.

This acceleration means that businesses that are slow to adopt and integrate AI may find themselves at a significant competitive disadvantage. The move from line completion in coding to AI assistants capable of handling complex project management tasks in Jira, as highlighted by Gerstenhaber, is a clear indicator that AI is moving beyond a simple productivity tool to becoming a more autonomous and integral part of the workforce. The increasing volatility and decreasing commoditization he predicts suggest a future where the most advanced models offer unique and powerful capabilities, creating a dynamic and competitive landscape for AI innovation.