ChatGPT’s Performance Improves If Question Asked With A Smiley: OpenAI Exec

Ever since ChatGPT first burst onto the scene, both engineers and everyday users have been experimenting with their prompts, tweaking them to get the LLM to deliver the answers they need. But it appears there are still simple, little-known ways to coax better performance out of ChatGPT.

ChatGPT can give better answers if the user appends a smiley face to their prompt, an OpenAI executive has said. “There’s a lot of really small, silly things — like adding a smiley face — that increases the performance of the model,” said OpenAI’s head of developer relations Logan Kilpatrick on a podcast. He also said that ChatGPT’s performance improved if the prompt asked it to take a break before answering the question.
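
For readers who want to test the claim themselves, here is a minimal sketch using OpenAI’s Python SDK. The model name, the sample question, and the exact phrasing of the prompt are illustrative assumptions, not anything Kilpatrick specified, and any gains are small and may not reproduce.

```python
# A minimal sketch of the trick described above, using OpenAI's Python SDK.
# The model name and prompts are illustrative; reported gains are small and
# not guaranteed to reproduce.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Summarize the plot of Hamlet in three sentences."

# Compare a baseline prompt with the same prompt reframed as Kilpatrick
# describes: a "take a break" framing plus a smiley appended at the end.
for prompt in [
    question,
    f"Take a break, and then answer this: {question} :)",
]:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("-" * 40)
```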

“If you think about it, it’s because the corpus of information that’s trained these models is the same things that humans have sent back and forth to each other. So if I tell a human to take a break and then come back to work, they’re fresher and able to answer questions better and work better,” he added.

Kilpatrick said the models end up behaving the same way. “Very similar things are true for these models. And again, when I see a smiley face at the end of someone’s message, I feel empowered. Like this is going to be a positive interaction, and I should be more inclined to give them a great answer and spend more effort on the thing that they asked me for,” he said.

Kilpatrick cautioned that the performance gain from adding a smiley face wouldn’t necessarily be large in all cases. “The challenge with all this stuff is it’s very nuanced, and it’s a small jump in performance. You could imagine on the order of 1% or 2%, which for a few sentences might not even be a discernible difference,” he said. “But if you’re generating an entire saga of text, the smiley face could actually make a material difference for you,” he continued.

Adding a smiley face or asking a model to take a break is just the latest bit of prompt engineering that people have experimented with to improve ChatGPT’s performance. In the past, users have asked models to think through a problem step by step to get better answers. Some more extreme examples have involved threatening the model with dire consequences (“there will be nuclear war”) in order to trick it into doing its best. But it turns out that positive reinforcement, such as adding a happy smiley or asking the model to take a break, could work just as well. Models, in the end, might be more like people than we realize.
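
The older “think step by step” trick is just as easy to try. Below is a minimal sketch, again using OpenAI’s Python SDK; the model name and the sample problem are illustrative assumptions rather than anything from the podcast.

```python
# A sketch of the classic "think step by step" prompt mentioned above.
# Model name and problem are illustrative.
from openai import OpenAI

client = OpenAI()

problem = (
    "A bat and a ball cost $1.10 together. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    messages=[{
        "role": "user",
        # Appending this phrase nudges the model to reason before answering.
        "content": f"{problem}\n\nLet's think step by step.",
    }],
)
print(response.choices[0].message.content)
```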