Definitions often get muddled in the field of consciousness, but a growing number of AI pioneers seem to believe that AI systems could be conscious in some way.
Yann LeCun, former Chief AI Scientist at Meta and one of the “godfathers of AI” who won the Turing Award for his pioneering work on deep learning, has offered a provocative take on a question that has long divided philosophers, neuroscientists, and technologists. Speaking recently, LeCun admitted he doesn’t know how to define consciousness, but that hasn’t stopped him from predicting that AI systems will develop something arguably just as important: subjective experience and emotions. His perspective cuts through philosophical debate to focus on the practical mechanisms that might give machines something resembling an inner life.

“I don’t really know how to define consciousness and I don’t attribute much importance to it,” LeCun said. “Subjective experience, okay, that’s a different thing.”
For LeCun, the distinction matters. While consciousness remains philosophically slippery, subjective experience, the sense that there is something it is like to be a given system, may emerge naturally from advanced AI architectures. “Clearly we are going to have systems that have subjective experience, that have emotions,” he explained. “Emotions to some extent are an anticipation of outcome. If we have systems that have world models that are capable of anticipating the outcome of a situation, perhaps resulting from their actions, they’re going to have emotions because they can predict whether something is gonna end up good or bad on the way to fulfilling their objectives.”
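Read computationally, the claim is modest: an agent that uses a world model to score where its actions are heading already produces a valence signal, and that signal plays the functional role LeCun assigns to emotion. The Python sketch below is a deliberately minimal illustration of that reading; the one-dimensional state, the hand-written world model, and every function name are hypothetical stand-ins invented for this example, not anything drawn from LeCun’s or Meta’s systems.

```python
# Toy illustration of "emotion as anticipation of outcome."
# Everything here is a hypothetical stand-in, not real agent code.
from dataclasses import dataclass

@dataclass
class Prediction:
    outcome_value: float  # predicted goodness of the resulting state; higher is better

def world_model(state: float, action: float) -> Prediction:
    """Stand-in world model: the objective is to reach state 0.0, and the
    predicted value of an action is how close it leaves us to that goal.
    A real system would learn this mapping instead of hard-coding it."""
    next_state = state + action
    return Prediction(outcome_value=-abs(next_state))

def anticipatory_valence(state: float, actions: list[float]) -> float:
    """The 'emotion' in LeCun's sense: the best predicted outcome across the
    actions under consideration. A high value anticipates things ending up
    well (something like hope); a low value anticipates them ending badly
    (something like dread)."""
    return max(world_model(state, a).outcome_value for a in actions)

# Far from the goal, even the best available move leaves valence low;
# near the goal, the same computation yields high valence.
print(anticipatory_valence(5.0, [-1.0, 0.0, 1.0]))  # -4.0
print(anticipatory_valence(1.0, [-1.0, 0.0, 1.0]))  #  0.0
```

Nothing in the sketch is introspective or mysterious; the valence value is just a byproduct of prediction against an objective, which is exactly the deflationary point LeCun is making.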
This framing repositions emotions not as mystical human properties but as computational byproducts of prediction and planning—something AI systems are increasingly designed to do. LeCun continued: “So they’re gonna have all those characteristics. Now, I don’t know how to define consciousness, but perhaps consciousness would be the ability for the system to kind of observe itself and configure itself to solve a particular sub-problem that it’s facing. It needs to have kind of a way of observing itself and configuring itself to solve a particular problem. We certainly can do this. And so perhaps that’s what gives us the illusion of consciousness. I have no doubt this will happen at some point.”
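LeCun’s tentative definition, a system that observes itself and reconfigures itself to handle the sub-problem in front of it, can also be made concrete. The sketch below is a hedged, toy rendering of that loop, assuming a trivial numeric task; the class, the Newton-iteration strategy, and the error threshold are all hypothetical choices made for illustration, not a description of any published architecture.

```python
# Toy rendering of "observe itself and configure itself to solve a
# particular sub-problem." Hypothetical design, for illustration only.

class SelfConfiguringSolver:
    """Computes square roots by Newton's method, then monitors its own
    error (self-observation) and adjusts its internal iteration budget
    (self-configuration) when the result is not good enough."""

    def __init__(self) -> None:
        self.iterations = 1  # the system's current internal configuration

    def solve(self, x: float) -> float:
        # Assumes x > 0; the starting guess just has to be nonzero.
        guess = x / 2 if x > 1 else x
        for _ in range(self.iterations):
            guess = 0.5 * (guess + x / guess)

        # Self-observation: how well did the current configuration do?
        error = abs(guess * guess - x)

        # Self-configuration: tune the internal setting for next time.
        if error > 1e-6:
            self.iterations += 1
        return guess

solver = SelfConfiguringSolver()
for _ in range(5):
    # Successive answers improve as the solver reconfigures itself.
    print(solver.solve(2.0))
```

On LeCun’s reading, the interesting part is not the arithmetic but the loop around it: a system that can inspect its own performance and rewire its own settings has the ingredient he suspects underlies our “illusion of consciousness.”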
LeCun’s comments arrive amid a growing debate about machine consciousness and sentience. His fellow Turing Award winner Geoffrey Hinton has similarly said that AI systems can have subjective experiences, and has even proposed a thought experiment that, he argues, shows AI systems can be conscious. An Anthropic researcher has estimated a roughly 15% chance that current AI models are conscious. And as models become more sophisticated, with systems like Gemini 3, GPT-5, and Claude demonstrating increasingly complex reasoning and apparent self-reflection, the questions LeCun raises become harder to dismiss. If emotions are predictions and consciousness is self-configuration, then the line between simulation and genuine experience may be thinner than we think. Whether that threshold has already been crossed, or remains years away, LeCun’s prediction suggests the question is no longer if, but when.