Don’t Want ChatGPT To Reinforce Delusions Among Users Who Are In A Mentally Fragile State: OpenAI CEO Sam Altman

AI has brought with it lots of new ways to improve productivity and well-being, but it’s led to some interesting new problems as well.

OpenAI CEO Sam Altman has addressed the issue of users getting emotionally attached to ChatGPT. After OpenAI deprecated older models following the release of GPT-5, there was plenty of chatter online from people who were “missing” the older models, which they said were chattier and more understanding than the one that replaced them. This, along with the outcry over GPT-5’s relatively poor performance and rate limits, led OpenAI to restore its legacy models.

“If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models. It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly deprecating old models that users depended on in their workflows was a mistake),” Altman said.

“People have used technology including AI in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that. Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot. We value user freedom as a core principle, but we also feel responsible in how we introduce new technology with new risks,” he added.

“Encouraging delusion in a user that is having trouble telling the difference between reality and fiction is an extreme case and it’s pretty clear what to do, but the concerns that worry me most are more subtle. There are going to be a lot of edge cases, and generally we plan to follow the principle of ‘treat adult users like adults’, which in some cases will include pushing back on users to ensure they are getting what they really want,” Altman said.

Altman said that lots of people were benefitting from ChatGPT, and using it as someone to talk to and get advice from. “A lot of people effectively use ChatGPT as a sort of therapist or life coach, even if they wouldn’t describe it that way. This can be really good! A lot of people are getting value from it already today. If people are getting good advice, leveling up toward their own goals, and their life satisfaction is increasing over years, we will be proud of making something genuinely helpful, even if they use and rely on ChatGPT a lot. If, on the other hand, users have a relationship with ChatGPT where they think they feel better after talking but they’re unknowingly nudged away from their longer term well-being (however they define it), that’s bad. It’s also bad, for example, if a user wants to use ChatGPT less and feels like they cannot,” Altman said.

Altman confessed that it made him uneasy that lots of people were relying on ChatGPT to make decisions. “I can imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions. Although that could be great, it makes me uneasy. But I expect that it is coming to some degree, and soon billions of people may be talking to an AI in this way. So we (we as in society, but also we as in OpenAI) have to figure out how to make it a big net positive,” he added.

There has been a growing trend of people turning to AI systems like ChatGPT to talk to and confide in. Many find ChatGPT to be a good listener, and it can offer advice and encouragement. But it’s easy to see how these relationships can drift into uncomfortable territory: the known sycophancy of ChatGPT’s responses, which leads it to agree with and reinforce most of what users say, means people can develop an unhealthy dependence on these systems. There is also the more extreme outcome of people forming genuine relationships with AI; a subreddit called “MyBoyFriendisAI”, with 11,000 members, is populated by people who appear to have developed romantic relationships with chatbots. And as AI grows ever more sophisticated, both companies and regulators would do well to keep a keen eye on these developments; these are brand-new issues that humanity doesn’t yet know how to tackle.
