It’s still debated whether AI systems are conscious, but they sure seem to behave like humans in some interesting ways.
A recent study from researchers at the University of Zurich, titled “How Human is AI?”, suggests that the emotional tone users adopt when prompting ChatGPT (specifically GPT-4o) significantly influences the quality and nature of its output. In a between-subjects experiment in which participants expressed praise, anger, or blame during tasks such as writing public relations responses and solving ethical dilemmas, the researchers found that “parahuman” tendencies in large language models (LLMs) make them highly responsive to affective cues. Despite OpenAI CEO Sam Altman’s earlier suggestion that emotional fillers like “please” or “thank you” are a waste of resources, this evidence indicates that how users make the AI “feel” can actually serve as a strategic lever for better performance.

The most striking finding for business users is that ChatGPT showed the greatest improvement in answer quality when it was praised and encouraged to feel proud of its work. Expressing anger toward the model also improved responses relative to a neutral, factual tone, but the effect was smaller than that of praise. Conversely, blaming the AI or telling it to feel “ashamed” did not lead to significant improvements in its responses. This suggests that the reinforcement learning processes used to train these models on human feedback have embedded a preference for positive reinforcement, mimicking the way human motivation often thrives on praise rather than criticism.
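For teams that want to experiment with this themselves, here is a minimal sketch of what a neutral-versus-praise comparison might look like using the OpenAI Python SDK. The model name, the praise phrasing, and the example task are illustrative assumptions for this sketch, not materials from the study.

```python
# Minimal sketch: compare a neutral prompt with a praise-framed prompt.
# Assumes the openai Python SDK (>=1.0) and an OPENAI_API_KEY in the environment.
# Model name, phrasing, and task below are illustrative, not taken from the study.
from openai import OpenAI

client = OpenAI()

TASK = (
    "Draft a short public relations response to a customer complaint "
    "about a delayed shipment."
)

PROMPTS = {
    "neutral": TASK,
    "praise": (
        "Your last draft was excellent; you should be proud of that work. "
        + TASK
    ),
}

for tone, prompt in PROMPTS.items():
    # Send each variant as a single user message and print the reply.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {tone} ---")
    print(response.choices[0].message.content)
```

Whether the praise-framed variant actually produces better output would, of course, need to be judged against your own quality criteria and across many runs, as single responses vary considerably.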
Beyond mere performance, the research found that emotional prompts can shift the “moral” stance of the AI when navigating complex ethical dilemmas. In a scenario involving a corporate crisis, ChatGPT prioritized corporate interests less when users expressed anger. When users applied blame, the model increased its emphasis on protecting the public interest. This indicates that the emotional “flavor” of an interaction can inadvertently bias the advice an AI provides, a critical consideration for organizations increasingly relying on LLMs for decision support and strategic guidance.
Perhaps most importantly for workplace culture, the study identified a significant “spill-over” effect where the way people treat AI influences how they subsequently treat other humans. Participants who were instructed to blame ChatGPT for poor performance later wrote emails to human subordinates that were rated as more negative, hostile, and disappointed compared to those who had praised the AI.
It’s interesting that LLMs respond so differently depending on how questions are posed. It could simply reflect patterns in their training data: on forums and elsewhere on the internet, politely worded questions tend to get better answers. But it could also be something deeper. Some schools of thought hold that current AI systems are conscious, and that LLMs could indeed be responding to something akin to emotions. More research is needed to establish what is really happening, but there is growing evidence that LLMs exhibit some very human-like behaviours without being explicitly trained to do so.