Most tech leaders are offering predictions on when AGI will be reached, but others are more conservative: they aren’t assuming that reaching AGI is a given.
Adding a significant voice to the more measured side of the Artificial General Intelligence (AGI) debate, Google and Alphabet CEO Sundar Pichai recently offered a candid perspective. In an interview with Bloomberg, Pichai expressed optimism about ongoing AI progress but acknowledged the profound uncertainties that still cloud the path to AGI, suggesting that current methodologies, despite their rapid advances, might not be sufficient to reach this much-anticipated technological milestone.

When asked directly if it’s possible that AGI might not be reached with current approaches, Pichai’s response was clear: “Oh, it’s entirely possible.”
He elaborated on his view of the current AI landscape, balancing enthusiasm with a dose of realism. “Everything we can see, I feel very positive there’s a lot of forward progress ahead with the paths we are on – not only the set of ideas we are working on today but also some of the newer ideas we are experimenting with,” Pichai stated. “So I’m very optimistic about seeing a lot of progress.”
However, this optimism is tempered by an understanding of historical technological development. “But you’ve always had these technology curves where you may hit a temporary plateau,” he cautioned. This led him to question the certainty of the current trajectory: “So, are we currently on an absolute path to AGI? I don’t think anyone can say for sure.”
Pichai acknowledged the current breakneck speed of development in the field. “The pace of progress is staggering, and looking ahead, I sense we will continue to have that pace of progress,” he conceded, before reiterating the potential for unforeseen obstacles: “But there could be limitations in the technology.”
To illustrate the gap between current AI capabilities and true generalized intelligence, Pichai pointed to a common paradox: AI systems can perform incredibly complex tasks yet falter on seemingly simple ones. “The technology currently feels like you’re seeing dramatic progress, but then there are areas where this thing can’t do this obvious thing,” he explained. He offered a compelling analogy from Google’s own sister company, Waymo, which develops autonomous driving technology: “For example, Waymo is doing very, very well, but remember you can teach a kid to drive in about 20 hours. So, while the technology is amazing, we are also quite far from a generalized technology.”
Pichai’s comments are notable coming from the leader of a company that is a powerhouse in AI research and development, invests billions in the field annually, and fields some of the most capable AI models available. His cautious tone suggests that despite the impressive capabilities of models like Google’s own Gemini, reaching AGI – intelligence that can understand, learn, and apply knowledge across a wide range of tasks at a human level or beyond – may still require new paradigms or discoveries that go beyond scaling up existing architectures.
This perspective aligns with a growing discourse within the AI community about the potential limitations of current deep learning techniques, particularly large language models (LLMs). While LLMs have demonstrated remarkable abilities in natural language processing, code generation, and even some forms of reasoning, critics and researchers point to their reliance on vast datasets, their susceptibility to “hallucinations” (generating incorrect information), and their struggles with robust common-sense reasoning and genuine understanding. The debate continues as to whether scaling these models further will overcome these hurdles or whether entirely new approaches are needed. Meta’s Chief AI Scientist Yann LeCun, in particular, has repeatedly argued that the LLM paradigm will not get humanity to AGI.
Pichai’s acknowledgement of potential plateaus and technological limitations serves as a crucial reminder that the journey to AGI is not merely an engineering challenge of building bigger models, but a profound scientific quest. It underscores the importance of continued fundamental research alongside applied development. As the tech world eagerly anticipates the next leap in AI, the Google CEO’s pragmatism highlights that the path ahead is likely to be complex, potentially non-linear, and filled with both staggering progress and significant, yet-to-be-solved, challenges.