It’s clear to most people that LLMs aren’t exactly like humans — they trip up on simple tasks like counting the number of ‘r’s in ‘strawberry’, for instance — but they could be more like humans than most people expect.
Eric Weinstein, who has a PhD in Mathematics from Harvard and is currently the Managing Director of Thiel Capital, argues that humans are more or less like LLMs. He says that many human interactions are remarkably similar to those performed by LLMs, requiring little thought or intelligence.

“You and I are two chat bots for the most part,” Weinstein said on the Diary of a CEO podcast. “My claim is — that’s the really disturbing part — that more or less we’re LLMs, more or less. We don’t do a single intelligent thing all day long. And the reason that they’re able to mimic us is because we don’t realize that intelligence is a last resort for us,” he added.
Weinstein said that many of our common interactions involve the same kind of sentence completion that LLMs perform. “If you think about greetings — your assistant was very kind. I got out of a black car that you guys sent around and I was greeted with the phrase, ‘There he is, the man, the myth…’ And I knew what was coming next — ‘The legend’, right? Because that is a sort of humorous way of giving an intimate greeting, but it’s still an LLM,” Weinstein explained.
“And I’m not saying that your assistant is an LLM. I’m saying that more or less what we do all day long is LLM interaction. ‘Hey buddy, how are you?’ ‘Good.’ ‘Good.’ ‘Things have been really busy. How about you?’ ‘Well, I got some travel coming up, kind of excited about it, but I have to get through some work first.’ I understand that’s an entirely scripted conversation,” Weinstein elaborated.
Weinstein suggested that real value lies in conversations or skills that LLMs can’t replicate. “That’s why I’m trying to say that I want to do podcasting that is outside of the LLM model. I don’t wanna do just dangerous, stupid stuff, but I wanna talk about things that I’ve never explored,” he said.
Weinstein seems to be making the same point that Daniel Kahneman made in his book Thinking, Fast and Slow. Kahneman argues that most of what we do is automatic and instinctive, requiring little deliberate thought. LLMs, similarly, are trained to generate the next word in a sequence, running on “autopilot”.
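The “scripted conversation” idea can be illustrated with a toy next-word predictor. This is only a sketch — a simple frequency-count model over a few of the greeting phrases quoted above, nothing like the neural networks behind real LLMs — but it shows how formulaic exchanges can be completed purely from statistics, with no understanding involved:

```python
from collections import Counter, defaultdict

# Toy "training data": a few scripted phrases like the ones Weinstein quotes.
corpus = [
    "there he is the man the myth the legend",
    "hey buddy how are you",
    "things have been really busy how about you",
]

# Count which word follows each pair of words (a simple trigram model).
counts = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for i in range(len(words) - 2):
        counts[(words[i], words[i + 1])][words[i + 2]] += 1

def complete(w1, w2, n=2):
    """Extend the phrase (w1, w2) by n words, always picking the
    most frequently observed continuation."""
    out = [w1, w2]
    for _ in range(n):
        nxt = counts[(out[-2], out[-1])].most_common(1)[0][0]
        out.append(nxt)
    return " ".join(out)

print(complete("the", "myth"))  # → "the myth the legend"
```

Given “the man, the myth…”, the model fills in “the legend” the same way a person on autopilot would — the point being that no intelligence is needed when the pattern is this strong.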
But while the behaviour of LLMs outwardly mimics human behaviour, there is a crucial difference between the two. Most people agree that LLMs aren’t conscious; as far as we can tell, they don’t yet have the capacity to feel emotions. Until this question is resolved one way or another, there will always be a clear point of distinction between humans and LLMs, even if they appear the same. Then again, as the robot in Westworld asks: if you can’t tell whether someone is human or machine, does it really make a difference?