Things Seem To Be Going Somewhat Slower Than The ‘AI 2027’ Scenario: Daniel Kokotajlo

Even as Google’s Gemini 3 and Nano Banana Pro models are hogging all the limelight for their capabilities, it appears that AI progress may actually be running slower than forecasters expected just over a year ago.

Daniel Kokotajlo, a former OpenAI governance researcher and lead author of the influential “AI 2027” scenario document, has publicly updated his artificial general intelligence (AGI) timeline forecasts, now suggesting arrival around 2030 rather than his previous median of 2027-2028.

“Yep! Things seem to be going somewhat slower than the AI 2027 scenario,” Kokotajlo wrote on X. “Our timelines were longer than 2027 when we published and now they are a bit longer still; ‘around 2030, lots of uncertainty though’ is what I say these days.”

The AI 2027 Project

The AI 2027 project, published earlier this year, presented a detailed scenario exploring what the world might look like if AGI arrives by 2027. The document walks through potential social, economic, and governmental responses to transformative AI systems, offering concrete speculation about how such a transition might unfold. The project, along with its interactive website, went viral for its detailed analysis of how AI might progress until AGI is reached.

However, the project’s title has generated confusion about its authors’ actual predictions. As Kokotajlo clarified in follow-up posts, the 2027 date was never meant as a firm prediction; it represented the mode (the single most likely year) of their probability distributions, not the median.
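To see why the mode and the median of a forecast can point to different years, consider a toy sketch in Python that computes both for a small, entirely invented probability distribution (the numbers below are illustrative placeholders, not the authors’ actual forecasts):

```python
# Toy forecast where the *mode* is 2027 but the *median* is 2028.
# These probabilities are invented for illustration only.
forecast = {
    2026: 0.10,
    2027: 0.30,  # single most likely year -> the mode
    2028: 0.25,
    2029: 0.15,
    2030: 0.10,
    2031: 0.10,  # "2031 or later", collapsed into one bucket for simplicity
}

# Mode: the year with the highest individual probability.
mode = max(forecast, key=forecast.get)

# Median: the first year at which cumulative probability reaches 50%.
cumulative = 0.0
for year in sorted(forecast):
    cumulative += forecast[year]
    if cumulative >= 0.5:
        median = year
        break

print(f"mode = {mode}, median = {median}")  # mode = 2027, median = 2028
```

A distribution shaped like this can justify writing a scenario set in its single most likely year even though the forecaster expects, on balance, that AGI arrives later.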

The Updated Forecasts

The revision comes alongside new data suggesting AI capabilities may be advancing more gradually than some optimistic scenarios predicted. In response to a graph tracking the length of coding tasks AI agents can complete autonomously, Kokotajlo noted that recent results from models like GPT-5.1 Codex-Max and various Claude versions show progress, but not at the exponential pace that would validate the most aggressive timelines.

A chart showing median AGI forecasts over time from Kokotajlo (“Daniel”) and co-author Eli Lifland (“Eli”) illustrates this recalibration. While Kokotajlo’s forecast dropped dramatically from 2070 in 2018 to around 2027 by 2023-2024, his latest forecast, from 2026, shows a slight uptick back to 2030.

Particularly interesting was Kokotajlo’s comment anticipating Gemini 3’s benchmark results: “I eagerly await Gemini 3’s METR horizon length results. Based on GPT 5.1 codex etc, I don’t expect it to be high enough to salvage the OG AI2027 trendline. It might not even be higher than GPT 5.1 codex-max.” The METR horizon-length metric measures the length of tasks, in human working time, that AI models can complete autonomously; in the AI 2027 model, rapid growth in this horizon is key to automating AI research and building AGI.
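For a sense of what checking a new model against such a trendline involves, here is a minimal sketch that fits an exponential (constant doubling time) trend to a few horizon measurements and extrapolates it forward. The data points, units, and resulting doubling time below are invented placeholders, not METR’s published figures:

```python
import math

# Hypothetical (months since baseline, autonomous task horizon in minutes)
# data points. Invented for illustration; see METR's publications for
# real measurements.
observations = [
    (0, 15),
    (6, 35),
    (12, 70),
    (18, 160),
]

# Fit log2(horizon) = a + b * t by least squares; 1 / b is the doubling time.
n = len(observations)
ts = [t for t, _ in observations]
ys = [math.log2(h) for _, h in observations]
t_mean = sum(ts) / n
y_mean = sum(ys) / n
b = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, ys)) / sum(
    (t - t_mean) ** 2 for t in ts
)
a = y_mean - b * t_mean
print(f"doubling time ~ {1 / b:.1f} months")

# Extrapolate the fitted trend to a future release date. A new model whose
# measured horizon lands well below this prediction is evidence that the
# trend is bending downward -- the kind of result Kokotajlo anticipates.
t_new = 24
predicted = 2 ** (a + b * t_new)
print(f"trend predicts ~{predicted:.0f}-minute horizon at month {t_new}")
```

The steeper the curve a scenario assumes, the more a single below-trend result undercuts it, which is why one model’s horizon score can carry so much weight in these debates.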

Addressing the Confusion

The shift in timelines sparked some criticism and confusion online, with observers questioning why the project retained its “AI 2027” branding if the forecasts had changed. Kokotajlo responded with a detailed clarification thread explaining the project’s original intent and methodology.

“Some people are unhappy with the AI 2027 title and our AI timelines. Let me quickly clarify,” he wrote, explaining that all authors considered 2027 to have at least a 10% probability at publication, and that 2027 or 2028 was the single most likely year in their probability distributions.

For Kokotajlo personally, AGI by the end of 2027 was roughly 40% probable when they published, meaning it was not his median forecast even at the time. “Why did we choose to write a scenario in which AGI happens in 2027, if it was our mode and not our median? Well, at the time I started writing, 2027 was my median, but by the time we finished, 2028 was my median,” he explained.

The authors emphasized that the scenario’s purpose wasn’t to argue AGI would happen in a specific year, but rather to explore what near-term AGI would concretely look like: “What would that even look like, concretely? How would the government react?”

Despite the timeline extension, Kokotajlo stressed that the core concerns driving the AI 2027 project remain unchanged. The team remains confident that AGI and artificial superintelligence (ASI) will eventually be built and might arrive soon, that ASI will be wildly transformative, and that current preparedness efforts are inadequate.

“We’re confident that: 1. AGI and ASI will eventually be built and might be built soon 2. ASI will be wildly transformative 3. We’re not ready for AGI and should be taking this whole situation way more seriously,” he wrote.

Kokotajlo indicated that the AI 2027 team will soon publish updated timelines and takeoff models, along with explanations of how and why their views have evolved. “The tl;dr is that progress has been somewhat slower than we expected & also we now have a new and improved model that gives somewhat different results,” he noted.

The team has already added a disclaimer to the AI 2027 landing page to address ongoing confusion about their forecasts.

Implications

The recalibration from forecasters like Kokotajlo may influence how the AI industry and policymakers think about preparedness timelines. While a shift from 2027 to 2030 may seem modest, it could significantly affect decisions about regulatory frameworks, safety research priorities, and infrastructure investments. There is also skepticism from researchers such as Yann LeCun, who has long argued that LLMs are not a path to AGI and that new technical breakthroughs will be required to get there, and from Fei-Fei Li, who has said that AGI is more of a marketing term than a concrete scientific goal. But given that advanced AI could end up harming humanity in ways we cannot currently foresee, it is useful to have people thinking and debating about how exactly AI will progress, and how it will impact humanity, in the coming years.
