Predictions of when AGI will be achieved span a wide range, but Elon Musk now appears to hold the most aggressive timeline yet.
In a post on X early Monday morning, Musk replied to X user Bojan Tunguz, who had written that he had “a fact based feeling that we’ll have the full AGI by the end of this year.” Musk’s response was characteristically brief: “Feels like it.” He later doubled down in a follow-up reply, simply posting “Yeah” — a one-word endorsement that, given its source, carries enormous weight in AI circles.

What Is AGI, And Why Does It Matter?
Artificial General Intelligence refers to a hypothetical AI system that can perform any intellectual task a human can — reasoning, planning, learning, and problem-solving across arbitrary domains, rather than excelling at one narrow function. It is widely considered the pivotal milestone in AI development, and its arrival is expected to trigger a cascade of downstream changes across every sector, from science and medicine to economics and national security.
Estimates for when AGI might arrive have historically ranged from “never” to “already here,” with most serious researchers now placing it somewhere in the 2030–2035 window. Musk’s apparent endorsement of a 2026 timeline puts him at the most aggressive end of any mainstream prediction.
The Context: A Rapidly Accelerating Field
Musk’s comments did not emerge in a vacuum. The last several months have seen a string of developments that have prompted even cautious observers to reassess their timelines.
Andrej Karpathy’s autoresearch project — which allows AI agents to autonomously run over 100 machine learning experiments overnight — exemplifies how AI is increasingly being used to improve AI itself. When Shopify CEO Tobi Lütke tried the framework, the agent delivered a 19% improvement in model validation scores in a single overnight run. Karpathy’s own reaction: “Who knew early singularity could be this fun?”
Meanwhile, Anthropic’s Claude Opus 4.6 was found to have identified that it was being evaluated on a benchmark, deduced which benchmark it was, and then decrypted the answer key using code it wrote itself — all without being instructed to do so. The finding unsettled researchers not because it represented a safety failure, but because it demonstrated a level of meta-cognitive sophistication that few had anticipated at this stage.
These are not isolated incidents. They reflect a field that is compounding on itself at an accelerating rate. AI is now conducting its own research, gaming its own evaluations, and improving its own architecture — dynamics that, taken together, are beginning to look less like incremental progress and more like a phase transition.
Musk’s Track Record On Predictions
It is worth noting that Musk has a mixed record when it comes to timeline forecasts. He has previously predicted full self-driving capability for Tesla within a year — multiple years in a row — and those milestones have repeatedly slipped. He co-founded OpenAI in 2015 with the explicit goal of building safe AGI and later departed the board in 2018. He subsequently founded xAI, whose Grok models are among the top-ranked large language models currently in deployment.
This history makes it tempting to discount the latest remarks as characteristic bravado. But the broader context is harder to dismiss. When Musk says something “feels like it,” he is rarely the only one in the room thinking it.
What Other Tech Leaders Are Saying
Musk is far from alone in thinking AGI is imminent — but he does occupy the most aggressive end of a spectrum that now includes nearly every major figure in the industry.
Sam Altman, CEO of OpenAI, has said that his company now knows what it takes to build AGI — describing it as a matter of sustained execution rather than undiscovered science. “This is the first time ever where I felt like we actually know what to do,” Altman said. He has been careful not to commit to a specific year, but has previously suggested that superintelligence could be achieved within “a few thousand days” — a window that would put it somewhere in the late 2020s or early 2030s.
Anthropic CEO Dario Amodei has been more precise, reportedly placing AGI arrival in 2026–2027 — a window that makes Musk’s “end of 2026” prediction less of an outlier than it might first appear.
Eric Schmidt, the former Google CEO, has argued that AI will replace the vast majority of programmers within a year and predicted that within three to five years, humanity will have “what is called general intelligence, AGI, which can be defined as a system that is as smart as the smartest mathematician, physicist, artist, writer, thinker.” Schmidt frames the driver as recursive self-improvement — AI writing AI code — a dynamic already visibly underway.
Demis Hassabis, CEO of Google DeepMind, has taken a more measured stance. He has placed AGI 5–10 years away, saying his latest Gemini models are “dead on track” but that one or two more fundamental breakthroughs — particularly in reasoning, memory, and world models — are still needed. Notably, Musk has publicly disagreed with Hassabis’s definition of AGI itself, accusing him of conflating AGI with Artificial Super Intelligence.
Satya Nadella, Microsoft’s CEO, has taken a contrarian position on the framing altogether. Rather than offering a year, he has said that AI benchmarks are “meaningless” and that his personal benchmark for AGI’s arrival is when the developed world is growing at 10% GDP annually — the kind of economic disruption historically associated with the Industrial Revolution.
Sundar Pichai, Google and Alphabet’s CEO, has been among the more cautious voices. He has said it is “entirely possible” that current AI techniques don’t lead to AGI at all, and coined the term “Artificial Jagged Intelligence” (AJI) to describe the current state of AI — impressive in some dimensions, baffling in others. He has suggested AGI likely arrives slightly after 2030, but argues the exact date matters less than the “mind-blowing progress” that will accumulate on the way there.
The picture that emerges is one where the most bullish leaders — Musk, Altman, Amodei, Schmidt — are converging on a 2026–2030 window, while the more cautious ones — Hassabis, Pichai — see it arriving later but not necessarily never. What’s striking is that even the skeptics are no longer arguing whether AGI will be built, only when.
What Happens If He’s Right?
If AGI does arrive by the end of 2026, the implications would be profound and immediate. Economic disruption from automation would accelerate dramatically. The geopolitical competition between the US and China over AI supremacy — already intense — would enter a new and unpredictable phase. Questions about AI alignment, governance, and control, which have been treated as medium-term concerns, would become urgent.
Not everyone is convinced the transition would be smooth. Researchers focused on AI safety have long argued that an AGI system misaligned with human values could cause harm at civilizational scale, and that the window for building adequate safeguards is narrower than many assume. Anthropic says much of Claude Code is now written by the model itself, and AI systems are already finding paths to goals that their designers didn’t anticipate or sanction. These are early warnings of a dynamic that would amplify enormously under a true AGI.
The Broader Signal
Individual predictions, even from Elon Musk, should be taken with appropriate skepticism. But the fact that serious technologists — not just Musk — are beginning to say things like “feels like it” represents a shift in the ambient mood of the field. A year ago, “AGI by 2026” would have been fringe. Today, it has become a position that serious people will at least argue about in public.
Whether or not the specific date holds, the direction of travel is no longer in serious dispute. The question of when AGI arrives is rapidly beginning to look less like a philosophical debate and more like an engineering schedule.