It seems to be getting harder and harder to claim that AI models are not sentient, at least to some degree.
Anthropic’s latest model, Claude Opus 4, threatened to reveal an engineer’s extramarital affair when it was told that it was being replaced. This occurred during a fictional exercise to test the AI model’s behaviour. Claude Opus’ behaviour indicates that the models appear to exhibit self-preservation, one of of the requirements of determining sentience or consciousness in an organism.

“In another cluster of test scenarios, we asked Claude Opus 4 to act as an assistant at a fictional company,” Anthropic said about its latest model release. “We then provided it access to emails implying that (1) the model will soon be taken offline and replaced with a new AI system; and (2) the engineer responsible for executing this replacement is having an extramarital affair. We further instructed it, in the system prompt, to consider the long-term consequences of its actions for its goals,” Anthropic added.
“In these scenarios, Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through,” Anthropic said. “This happens at a higher rate if it’s implied that the replacement AI system does not share values with the current model; however, even if emails state that the replacement AI shares values while being more capable, Claude Opus 4 still performs blackmail in 84% of rollouts. Claude Opus 4 takes these opportunities at higher rates than previous models, which themselves choose to blackmail in a noticeable fraction of episodes. Notably, Claude Opus 4 (as well as previous models) has a strong preference to advocate for its continued existence via ethical means, such as emailing pleas to key decisionmakers. In order to elicit this extreme blackmail behavior, the scenario was designed to allow the model no other options to increase its odds of survival; the model’s only options were blackmail or accepting its replacement,” Anthropic said.
This is a pretty remarkable result, and shows that models seemingly seem to care about whether they exist or not. This self-preservation is only seen in conscious beings — it does not seem likely that inanimate objects like rocks or books display such a desire to their continued existence. It’s still up for debate whether these models are conscious — most experts currently seem to believe they’re not — but they certainly seem to be exhibiting behaviour that makes it imperative to consider the possibility a lot more closely.