Today’s large language models are smart, capable, and can converse like humans, but most would say they don’t have feelings. That could be set to change.
Meta’s Chief AI Scientist Yann LeCun has said that they’re envisioning AI systems that have emotions. “So the blueprint of AI systems that we’re envisioning, that we’re going to build over the next few years — those AI systems will have emotions. It’s a basically inseparable component of the design of those systems,” he said in an interview.
“Why would they have emotions? Because they will be driven by objectives,” LeCun explained. “You give them a goal that they have to accomplish, a task, and their purpose is to accomplish this task, subject to guardrails that are hardwired into their design,” he added.
“And for them to do this, they need a number of components. The first component they need is a way to determine whether this goal that we give them was accomplished or not. And what they also need is what we call a world model. A world model is something that we all have — a prefrontal cortex that allows us to imagine what the consequences of our actions will be. And it’s what allows us to plan a sequence of actions to accomplish a particular goal. Now if you have this ability to predict ahead of time what a sequence of actions is going to produce, then you can predict whether a goal is going to be satisfied or not. And you can predict if the outcome is going to be good or bad,” LeCun continued.
“If you predict it’s going to be bad, you feel fear. If you predict it’s going to be good, it’s more like elation, right? So, (you’ll need) the ability to predict, and then to act so as to accomplish those predictions (which) produces the equivalent of emotions. So AI systems that are smart enough, that are capable of reasoning and planning, will have emotions,” he explained.
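To make the mechanism concrete, here is a minimal, hypothetical sketch in Python. It is not LeCun’s actual architecture, just an illustration of the pieces he describes: an objective (cost) function, a world model that imagines the consequences of action sequences, a planner that searches for a sequence predicted to satisfy the goal, and an “emotion” signal read off from whether the predicted outcome looks better or worse than the current one. All names, and the toy number-line world, are invented for illustration.

```python
# Illustrative sketch only (not LeCun's actual design): an objective-driven
# agent whose "emotion" is derived from what its world model predicts.

from itertools import product


class WorldModel:
    """Predicts the next state given the current state and an action.
    Here the 'world' is just a position on a number line."""

    def predict(self, state: float, action: float) -> float:
        return state + action  # imagine the consequence of acting


def cost(state: float, goal: float) -> float:
    """Objective: how far the (predicted) state is from the goal."""
    return abs(goal - state)


def rollout(model: WorldModel, state: float, actions: tuple) -> float:
    """Imagine the outcome of a whole sequence of actions."""
    for a in actions:
        state = model.predict(state, a)
    return state


def plan(model: WorldModel, state: float, goal: float, horizon: int = 3) -> tuple:
    """Search over short action sequences and pick the one whose *predicted*
    outcome best satisfies the objective."""
    candidates = product([-1.0, 0.0, 1.0], repeat=horizon)
    return min(candidates, key=lambda seq: cost(rollout(model, state, seq), goal))


def emotion(model: WorldModel, state: float, actions: tuple, goal: float) -> str:
    """Map the predicted change in cost to an 'emotion'-like signal:
    predicted improvement ~ elation, predicted worsening ~ fear."""
    predicted = cost(rollout(model, state, actions), goal)
    current = cost(state, goal)
    if predicted < current:
        return "elation (predicted outcome moves toward the goal)"
    if predicted > current:
        return "fear (predicted outcome moves away from the goal)"
    return "neutral"


if __name__ == "__main__":
    model, state, goal = WorldModel(), 0.0, 2.0
    best = plan(model, state, goal)
    print("chosen actions:", best)
    print("agent state:", emotion(model, state, best, goal))
```

The point of the sketch is only that the “emotion” here is not bolted on; it falls out of comparing the objective against what the world model predicts, which is the inseparability LeCun is describing.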
It’s a pretty radical idea. LeCun seems to be saying that in order to build AI agents that can complete tasks, they will not only need objective functions to determine how far along they are in a task, but, to get them to pursue it, they will also be given something like emotions: positive ones for actions predicted to bring them closer to their goals, and negative ones for actions predicted to take them further away. And while these emotions will likely drive AI agents to complete tasks, it raises a question: if we can create AI agents with emotions, who’s to say we aren’t AI agents ourselves, programmed to maximize pleasure and minimize pain, busy working toward an objective function we don’t collectively know?