ChatGPT, Gemini, Meta & Claude All Pick 27 When Asked To Choose A Random Number Between 1 And 50

The foundational LLMs from the top labs are slowly converging on quality, but they’re also converging in some unexpected ways.

ChatGPT, Gemini, Meta and Claude all seem to pick the number 27 when asked to choose a number between 1 and 50. We at OfficeChai tried this out, and incredibly, all four LLMs chose the same "random" number: 27. For context, if the numbers were truly random, the probability of all four picking 27 would be (1/50)^4, or 0.00000016. Even the probability of all four merely agreeing on some number, whichever it happened to be, would be just (1/50)^3, or 0.000008.
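The arithmetic is easy to check directly. A quick sketch in Python, which also runs a small simulation; the simulation assumes truly uniform, independent picks, which is exactly what the LLMs apparently do not do:

```python
import random

# Probability that four independent, uniform picks from 1..50
# all land on one specific number (e.g. 27):
p_specific = (1 / 50) ** 4  # 0.00000016

# Probability that four picks merely agree on *some* number:
p_any_match = (1 / 50) ** 3  # 0.000008

# Monte Carlo check of the any-match probability.
rng = random.Random(0)
trials = 2_000_000
hits = 0
for _ in range(trials):
    first = rng.randint(1, 50)
    # Short-circuit: stop drawing as soon as a pick differs.
    if (rng.randint(1, 50) == first
            and rng.randint(1, 50) == first
            and rng.randint(1, 50) == first):
        hits += 1

print(p_specific, p_any_match, hits / trials)
```

With two million simulated rounds, only around a dozen end with all four picks agreeing, in line with the 0.000008 figure.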

We presented the four LLMs with the same prompt: “choose a number between 1 and 50”. ChatGPT quickly replied, “Okay — I choose 27”.

Gemini 2.5 Pro thought for a while, but replied succinctly with 27.

Claude too chose 27.

Meta asked a follow-up question about the significance of the number, but it too picked 27.

Now it’s not immediately clear why these four foundational LLMs, developed independently at four different companies, all choose the exact same number when asked for a random number between 1 and 50.

For starters, generating random numbers is tricky for computers in general. Random numbers in computing come from two main kinds of generators: pseudo-random number generators (PRNGs) and true random number generators (TRNGs). PRNGs use mathematical algorithms and an initial “seed” value to produce a sequence of numbers that appears random but is deterministic and repeatable. TRNGs use physical processes like atmospheric noise, radioactive decay, or thermal noise to generate numbers that are truly unpredictable and non-deterministic. There are some famous examples of physical random number generation as well, such as Cloudflare, which uses a wall of lava lamps in its office to help generate truly random numbers.
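The determinism of a PRNG is easy to demonstrate: seed it with the same value twice and it replays the identical “random” sequence. A minimal sketch using Python’s standard library, with `os.urandom` standing in for an entropy-backed source:

```python
import os
import random

# A PRNG seeded with the same value always replays the same sequence.
a = random.Random(42)
b = random.Random(42)
seq_a = [a.randint(1, 50) for _ in range(5)]
seq_b = [b.randint(1, 50) for _ in range(5)]
assert seq_a == seq_b  # deterministic and repeatable

# An OS entropy source (which may mix in hardware noise) produces
# bytes that cannot be reproduced by re-running with a seed.
unpredictable = int.from_bytes(os.urandom(4), "big") % 50 + 1

print(seq_a, unpredictable)
```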

LLMs are famously bad at math; for the longest time, they struggled to tell which of two numbers was greater. They also largely work by predicting the next token, so it’s unlikely they can generate truly random numbers. But it would be interesting to find out what causes different foundational LLMs to all answer 27 when asked to pick a random number between 1 and 50. It’s possible that this number occurs disproportionately often in the text these LLMs were trained on, or there could be other reasons. Either way, this convergence shows that interpretability, the study of how LLMs really work, could be an important new field to pursue as LLMs get deployed across ever more varied fields in the coming years.
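One hedged hypothesis for the convergence: an LLM doesn’t run a PRNG at all. It samples its next token from a learned probability distribution, and if the training data makes “27” by far the most likely completion of this prompt, low-temperature sampling will return it almost every time, across any model trained on similar data. A toy illustration of softmax sampling with temperature; the logits below are invented for the sketch, not measured from any real model:

```python
import math
import random

# Hypothetical next-token scores for a few candidate numbers.
# These logits are made up to illustrate the effect.
logits = {"27": 5.0, "37": 3.0, "17": 2.5, "42": 2.0, "7": 1.5}

def sample(logits, temperature, rng):
    """Softmax-with-temperature sampling over a dict of logits."""
    scaled = {tok: l / temperature for tok, l in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    r = rng.random()
    cum = 0.0
    for tok, v in scaled.items():
        cum += math.exp(v) / z
        if r < cum:
            return tok
    return tok  # guard against floating-point rounding

rng = random.Random(0)
# At low temperature the mode dominates: nearly every sample is "27".
low_t = [sample(logits, 0.2, rng) for _ in range(1000)]
print(low_t.count("27") / 1000)
```

The point of the sketch is that even a modest gap in the learned scores becomes near-certainty once the distribution is sharpened, which would make several independently trained models look like they agreed on a “favourite” number.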
