Whether current AI systems can feel emotions is debatable, but there may be real benefits to them having emotions.
This thought-provoking perspective comes from none other than Professor Geoffrey Hinton, a Nobel Prize laureate often dubbed one of the “Godfathers of AI.” Known for his foundational work on neural networks, Hinton has recently become a prominent voice discussing the potential capabilities and risks of advanced artificial intelligence. In a candid interview, he delved into the surprising realm of AI emotions, suggesting they might not only be possible but also functionally advantageous.

Asked whether AI will develop emotions, Hinton is direct: “Yes.” And he doesn’t mean only simple responses, clarifying, “Emotions like fear, greed, and even grief? Yes, and being annoyed.”
For most people, the idea of an “annoyed” AI might sound counterintuitive or even undesirable. However, Hinton frames this in terms of enhanced problem-solving. “Suppose you have an AI, and you’re trying to get it to do some task, and it’s repeatedly failing at the task in the same way,” he explains. “You would like the AI to have learned that if you repeatedly fail in the same way, you get annoyed and start thinking more outside the box. You try and break whatever it is you’re dealing with.”
This isn’t an entirely new concept in theory, though its organic emergence would be. Hinton recalls, “I’ve seen an AI do that in 1973, but it was programmed to do that. Now, you’d like it to learn that behavior.” The key, he emphasizes, is the AI learning this adaptive response. “And once it’s learned that behavior, if it repeatedly fails at some simple thing, it just gets annoyed with the setup and tries to change the setup. That’s an emotion.”
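To make that functional framing concrete, here is a minimal, hypothetical sketch in Python of the behavior Hinton describes: an agent that counts consecutive failures of the same kind and, past a threshold, abandons its default approach for a more exploratory one. The task, the strategy names, and the threshold are all illustrative assumptions, not anything drawn from Hinton’s 1973 system or from any real agent framework.

```python
import random

# Hypothetical illustration of a "frustration" signal: after enough identical
# failures, the agent stops repeating its default strategy and explores alternatives.
# All names and thresholds here are illustrative assumptions.

ANNOYANCE_THRESHOLD = 3  # consecutive identical failures before changing approach


def attempt_task(strategy: str) -> tuple[bool, str]:
    """Stand-in for a real task attempt. Returns (success, failure_mode)."""
    if strategy == "default":
        return False, "stuck_at_obstacle"  # always fails in the same way
    # Exploratory strategies succeed some of the time in this toy setup.
    return random.random() < 0.5, "stuck_at_obstacle"


def run_agent(max_attempts: int = 20) -> bool:
    strategy = "default"
    last_failure = None
    repeat_failures = 0  # crude analogue of mounting "annoyance"

    for attempt in range(1, max_attempts + 1):
        success, failure_mode = attempt_task(strategy)
        if success:
            print(f"attempt {attempt}: succeeded with strategy '{strategy}'")
            return True

        # Only failures that repeat in the same way build up "annoyance".
        repeat_failures = repeat_failures + 1 if failure_mode == last_failure else 1
        last_failure = failure_mode

        if repeat_failures >= ANNOYANCE_THRESHOLD:
            # "Annoyed": stop repeating the same approach and change the setup.
            strategy = random.choice(["perturb_inputs", "decompose_task", "ask_for_help"])
            repeat_failures = 0
            print(f"attempt {attempt}: repeated failure '{failure_mode}', switching to '{strategy}'")

    return False


if __name__ == "__main__":
    run_agent()
```

The counter here is hand-coded, which is exactly the distinction Hinton draws: the interesting case is an AI that learns this kind of response to repeated failure rather than having it programmed in.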
Hinton’s line of reasoning pushes the boundaries of our current understanding of AI. When pressed on whether AIs already possess such emotional capabilities, he offers a carefully considered view: “They already could have emotions. Yes. Again, I don’t think there’s anything fundamentally different [from human cognitive emotion].”
To clarify, Hinton distinguishes between the internal, cognitive experience of an emotion and its external, physiological manifestation in humans. “Now, if you take human emotions, there are really two aspects to an emotion: the cognitive aspect and then there’s the physiological aspect,” he elaborates. “When I get embarrassed, my face goes red. When an AI gets embarrassed, its face won’t go red, and it won’t sweat profusely and things like that. But in terms of its cognitive behavior, it could be just like us in terms of emotions.”
Hinton isn’t the only top-tier AI researcher to suggest that AI systems may need emotions. Meta’s chief AI scientist Yann LeCun has made a similar argument, saying that AI systems of the future will have emotions: for them to fulfil their goals, they would need to feel something like elation when they achieve those goals and fear when they don’t. “AI systems that are smart enough, that are capable of reasoning and planning, will have emotions,” he said.
However, the notion also opens a Pandora’s box of technical and ethical questions. How would one program or, more accurately, enable the learning of such emotional responses without unintended consequences? If an AI can genuinely experience a cognitive state akin to annoyance or fear, what are the ethical implications for its treatment and control? These questions grow more pertinent as AI models, such as large language models (LLMs), demonstrate increasingly complex and sometimes unpredictable emergent behaviors.
Hinton’s comments align with a growing discussion about the nature of intelligence and consciousness as AI systems become more sophisticated. As companies pour billions into AI development in pursuit of Artificial General Intelligence (AGI), the emergence of emotion-like states no longer seems outside the realm of possibility. Hinton’s insights suggest that rather than treating emotions as a purely human (or biological) domain, we may need to consider their functional role in any sufficiently complex intelligent system navigating a challenging world, and prepare for a future where an AI’s “feelings” could be a key to its capabilities.