Most discussions of whether AI is currently in a bubble center on scaling laws and whether current techniques will get us to AGI, but Geoffrey Hinton has an interesting new frame on the phenomenon.
The 77-year-old cognitive psychologist and computer scientist, often called the “Godfather of AI” for his pioneering work on neural networks, recently offered a provocative take on the AI bubble question. Unlike most commentators, who focus on technical limitations or market valuations, Hinton suggests the real bubble might stem from a blind spot in how companies are calculating returns: they’re not accounting for the social upheaval their investments will trigger.

“There’s kind of two senses of AI bubble,” Hinton explained. “There’s the sense that old-fashioned symbolic AI people often raise, that all this stuff is just hype. It doesn’t really understand, it won’t be able to do what people claim it’s going to be able to do. I don’t think at all that it’s a bubble in that sense.”
Hinton is unequivocal about AI’s technical capabilities. The technology is “really doing a lot,” he noted. “It still makes mistakes, some things it’s not very good at still, but it’s getting better rapidly all the time. So there’s not a bubble in the sense that the technology’s not going to work.”
But there’s another kind of bubble, Hinton warns. “There may be a bubble in the sense that people aren’t going to get the money back on their investments, because as far as I can see, the reasons for this huge investment is the belief that AI can replace people in lots of jobs, or it can make people much more efficient. So you’ll need far fewer people using AI assistance.”
Here’s where Hinton identifies the critical miscalculation: “I don’t think people have factored in enough the massive social disruption that will cause. So they’re assuming everything else is going to proceed as normal. We’ll replace lots of workers, companies will make bigger profits, and they’ll pay us a lot for the AI that does that.”
The problem, according to Hinton, lies in the assumption of business-as-usual. “If you do get huge increases in productivity, that will be great for everybody if the wealth was shared around equally. But it’s not going to be like that. It’s going to cause huge social disruption.”
Hinton’s warning arrives at a moment when the AI industry’s economic assumptions are beginning to face scrutiny. Major tech companies have poured hundreds of billions into AI infrastructure, with Microsoft, Google, Amazon, and Meta collectively planning hundreds of billions of dollars in capital expenditures. The business case typically rests on productivity gains and labor cost reductions. But Hinton suggests these projections may be fatally flawed because they treat social stability as a constant rather than a variable.
Recent developments hint at the disruption Hinton anticipates. Hollywood writers and actors went on strike in 2023 partly over AI concerns. Customer service companies are already deploying AI agents at scale, displacing thousands of workers. Our AI layoffs tracker currently estimates that nearly 190,000 jobs have been lost because of AI. Meanwhile, political movements questioning technological change are gaining traction globally, from AI regulation in the EU to growing calls for universal basic income as a response to automation.
The implications are stark: if rapid AI-driven unemployment creates political instability, regulatory backlash, or consumer demand collapse, the returns tech companies expect may never materialize. Companies might build the most powerful AI systems in history only to find the social and political environment has shifted so dramatically that deploying them becomes untenable, or that the displaced workers who were supposed to become customers can no longer afford the products. In this scenario, the bubble wouldn’t burst because the technology failed, but because investors underestimated how much society would need to change—and resist changing—to accommodate it.