LLMs Have Debunked The Idea That Consciousness Is Something Magical That Exists Outside Physics: Stephen Wolfram

While the rise of LLMs and AI has convinced some people that consciousness isn’t something that can be created with computers, it has convinced others of exactly the opposite.

Stephen Wolfram, the British-American computer scientist, physicist, and founder of Wolfram Research, has emerged as one of the most provocative voices in this debate. Known for his work on cellular automata, computational theory, and the Wolfram Language, he has spent decades exploring the computational nature of the universe itself. In a recent conversation, Wolfram made a striking claim: large language models have fundamentally undermined the mystical view of consciousness that has persisted in both popular culture and academic philosophy for centuries.


“The idea that there’s something magic that goes beyond physics leads to conscious behavior — I think that LLMs kind of put the final nail in that coffin,” he says. “Because there were all these things where it’s like, oh, maybe it can’t do this, but actually it does, and it’s just an artificial neural net.”

His argument is straightforward: for years, skeptics claimed that certain cognitive abilities—understanding context, generating creative responses, reasoning through complex problems—required something beyond mere computation. Each time an AI system has demonstrated one of these supposedly unique capabilities, another brick has fallen from the wall separating human consciousness from mechanical processes.

But Wolfram goes deeper, questioning the very foundation of what we call consciousness. “I think our notion of consciousness is a lot related to the fact that we believe in the single thread of experience that we have,” he notes. “It’s not obvious that we should have a persistent thread of experience. In our models of physics, we are made of different atoms of space at every successive moment of time. So the fact that we have this belief that we are somehow persistent, we have this thread of experience that extends through time, is not obvious. It’s something that just happens to be the case.”

This perspective reframes consciousness not as a fundamental property of the universe, but as a useful cognitive illusion—a kind of narrative trick our brains perform. Wolfram then offers an evolutionary explanation for why this particular illusion might have emerged: “I realize that probably when animals first existed in the history of life on earth, that’s when we started needing brains. If you are a thing that doesn’t have to move around, the different parts of you can be doing different kinds of things. If you’re an animal, then one thing you have to do is decide: are you going to go left or are you going to go right? There’s a single decision you have to make, and I think it’s a little disappointing to feel that this whole thing that ends up being what we think of as consciousness might have originated in just that very simple need to decide if you are an animal that can move.”

Wolfram’s observations arrive at a pivotal moment in the AI discourse. Recent developments in large language models have demonstrated capabilities that increasingly blur the line between simulation and understanding: these systems can engage in abstract reasoning, exhibit what appears to be creativity, and even demonstrate something resembling self-reflection. Meanwhile, researchers like Yoshua Bengio and Geoffrey Hinton have publicly grappled with questions about whether current AI systems might already possess some form of consciousness, and philosophers like David Chalmers have begun reconsidering their positions on machine consciousness in light of recent advances. Wolfram’s deflation of consciousness into an evolutionary coordination mechanism for mobile organisms offers a third way: perhaps the question itself rests on a misunderstanding of what consciousness is. If consciousness is simply what it feels like to be a decision-making system with a unified model of itself, then the gap between biological and artificial minds may be far narrower than we imagine—not because machines have achieved something magical, but because consciousness was never magical to begin with.