There are widespread concerns that AI progress is slowing down, particularly after the lukewarm reaction to GPT-5, but some in the industry believe it remains very much on track.
Anthropic co-founder Jack Clark says that anyone who thinks AI is slowing down is “fatally miscalibrated”. “Five years ago the frontier of LLM math/science capabilities was 3 digit multiplication for GPT-3. Now, frontier LLM math/science capabilities are evaluated through condensed matter physics questions. Anyone who thinks AI is slowing down is fatally miscalibrated,” he said on X. He attached to his post an image of the kind of complex problem that AI can now solve.

In the comments, Clark was asked by an X user whether he would update what he’d said at the Congressional hearing on AI in June this year. “We believe that extremely powerful systems are going to be built in, you know, the coming 18 months or so. End of 2026 is when we expect truly transformative technology to arrive. There must be a federal solution here,” Clark had then said.
Clark said he believed that AI was on track to meet those deadlines. “I continue to think things are pretty well on track for the sort of powerful AI system defined in machines of loving grace – buildable end of 2026, running many copies 2027. Of course, there are many reasons this could not occur, but lots of progress so far,” he said.
In his October 2024 essay “Machines of Loving Grace”, Anthropic CEO Dario Amodei described how AI could create a “country of geniuses in a datacenter”, and Clark believes that we’re on track to achieve this by the end of 2026. Not everyone is on board with the idea: Meta’s Yann LeCun, who is famously bearish on LLMs, has called Amodei’s claim “complete BS”. The relatively modest gains that GPT-5 demonstrated over other frontier models, such as Google’s Gemini or xAI’s Grok, have led many to believe that simply scaling LLMs won’t get us to superintelligence.
But zooming out a bit, AI does appear to be getting smarter at a fairly regular rate. GPT-2 was released in 2019 and could do little more than form sentences. GPT-3.5, released in 2022, could not only write fluent English but also show some creativity, such as with poems. GPT-4, released in 2023, was able to perform increasingly sophisticated tasks. GPT-5, released in 2025, can reason, write code, and work through complex problems.
AI does still seem to be improving rapidly, but a few factors have contributed to the idea that it is slowing down. For starters, OpenAI CEO Sam Altman overpromised and under-delivered with GPT-5: he’d shared a picture of the Death Star before the launch, while the actual product was only incrementally better than the competition. Also, ChatGPT launched in late 2022 on an older model, and GPT-4 arrived only a few months later, which made the curve look steeper than it was; the model powering ChatGPT at launch had been trained several months prior. The number of labs in the space also gives the impression of incremental change: with Google, xAI, and Anthropic all constantly releasing models that are slightly better than the competition’s, the jump between two major model releases from a single company is never truly felt. It remains to be seen whether AI keeps improving at the pace it has so far, but AI insiders, at least, seem to believe that it is on track to do so.