People Think LLMs Are Different From Us, But They’re Very Like Us: AI Godfather Geoffrey Hinton

LLMs are thought to be a completely different kind of technology, but they may be quite similar to one we have known forever: our own minds.

Dr. Geoffrey Hinton, the Turing Award laureate widely regarded as the “Godfather of AI,” offers a perspective that is as profound as it is simple: when we look at Large Language Models (LLMs), we are looking at a reflection of ourselves. Hinton, whose pioneering work on neural networks laid the foundation for the current AI revolution, suggests that our understanding of these systems is shallow, leading to a misconception about their nature. He argues that, far from being fundamentally different from us, these models ‘understand’ and process language in a way that is currently our best model of how the human brain does the same.

“People have used ChatGPT, Gemini, and Claude, so they have some sense of what they can do,” he begins, “but they understand very little about how they actually work. They still think that AI is very different from us.”

His central message is a call to shift our perspective. “I think it’s very important for people to understand that it’s actually very like us,” Hinton states.

Hinton takes particular aim at the field of linguistics, which has historically proposed its own theories for human language acquisition and understanding. “Our best model of how we understand language is these large language models,” he declares. “Linguists will tell you, ‘No, that’s not how we understand language at all.’ They have their own theory that never worked; they never could produce things that understood language using their theory. They basically don’t have a good theory of meaning.”

The key distinction, according to Hinton, lies in the mechanics of meaning. Traditional theories often grapple with abstract rules and symbols. In contrast, the AI models that power today’s chatbots and creative tools use a different, more effective method. “These neural nets use large feature vectors to represent things,” Hinton explains. “It’s a much better theory of meaning.” In essence, these models represent words and concepts as complex arrays of numbers (vectors) that capture a vast web of relationships and contextual nuances—a process that might be closer to the synaptic activity in our brains than we realize. Hinton concludes with a call for deeper public discourse: “I wish the media would go into more depth to give people an understanding.”
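To make the vector idea concrete, here is a minimal sketch in Python of meaning-as-geometry. The words, the four hand-crafted dimensions, and the numbers are illustrative assumptions for this article, not anything Hinton or an actual LLM uses; real models learn embeddings with hundreds or thousands of dimensions from data, but the principle is the same: related words end up pointing in similar directions.

```python
import numpy as np

# Illustrative, hand-crafted 4-dimensional "feature vectors".
# Real models learn far larger embeddings from text rather than by hand.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1, 0.05]),
    "queen": np.array([0.9, 0.2, 0.8, 0.05]),
    "apple": np.array([0.05, 0.1, 0.1, 0.9]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors; values near 1.0 mean similar meaning."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words with related meanings point in similar directions in the vector space.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # ~0.71 (related)
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # ~0.16 (unrelated)
```

In a trained model, the “web of relationships and contextual nuances” lives in the geometry of these learned vectors, which is the sense in which Hinton calls them a better theory of meaning.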

Hinton’s statement carries significant weight, not least because of his recent, highly publicised concerns about the potential long-term risks of advanced AI. His departure from Google in 2023 was driven by a desire to speak freely about these dangers. Yet his warning is not based on the idea that AI is an alien monster. On the contrary, it is rooted in the belief that its intelligence is so analogous to our own that it will inevitably learn, reason, and, potentially, develop goals of its own. This perspective reframes the AI safety debate from preventing a “Terminator” scenario to managing a new kind of intelligence that shares our cognitive architecture, complete with emergent capabilities and unpredictable behavior. The “hallucinations” of LLMs, for instance, are often compared to human confabulation: our own tendency to fill in memory gaps with plausible yet false information. As AI becomes more integrated into our economic and social structures, from automating intellectual labor to shaping public opinion, Hinton’s call to understand its fundamental similarity to us is not just an academic point; it is a crucial step in navigating a future where we are no longer the only intelligence shaping the world.
