A Fast AI Takeoff Is Unlikely: Microsoft CEO Satya Nadella

Sam Altman believes that a fast AI takeoff scenario is now more likely than he previously thought, but Microsoft CEO Satya Nadella doesn't seem to agree.

Microsoft CEO Satya Nadella has said that a fast AI takeoff scenario is unlikely. He suggested this would be not for technical reasons, but because societies wouldn't immediately give AI systems the power to make important decisions. A fast AI takeoff refers to a scenario in which AI systems rapidly become more capable and come to play a much bigger part in human life.

“Today, you cannot deploy these intelligences unless and until there’s someone indemnifying it as a human,” Nadella said on the Dwarkesh podcast. “Even the most powerful AI is essentially working with some delegated authority from some human. You can sort of say, oh, that’s all alignment. And that’s why I think you have to sort of really get these alignments to actually work and be verifiable in some way,” he added.

“But I just don’t think that you can deploy intelligences that are (powerful). So, for example, this AI takeoff problem may be a real problem. But before it is a real problem, the real problem will be in the courts. Like no society is going to allow for some human to say, hey, I did that,” he added.

“Well, there’s a lot of societies in the world. And I wonder if any one of them might not have a legal system that might be more amenable. There can be rogue actors. I’m not saying there won’t be rogue actors. I mean, there are cyber criminals and rogue states. They’re going to be there. But to think that the human society at large doesn’t care about it is also not going to be true, right?”

“So I think we all will care, right? We do, we know how to deal with the rogue states and rogue actors today. The world doesn’t sit around, and say, we’ll tolerate that. So therefore, you know, that’s why I’m glad that we have a world order in which anyone who is a rogue actor in a rogue state (will face) consequences,” he added.

Nadella seemed to be saying that a fast AI takeoff scenario is unlikely because humans will still need to be responsible for the actions taken by the AI agents they deploy. As such, they'll ensure that AI agents are working as intended before putting them out in the wild — they could be held liable by courts if these agents end up doing something illegal. This means that AI agents will likely be deployed not as soon as they're technically feasible, but only once they've been thoroughly vetted to be performing as expected. This will create friction, and likely delay their deployment. And if a rogue nation does decide to deploy AI agents to act illegally, the world might collectively look to sanction it, which would push AI agents worldwide toward following international norms. It's a persuasive argument, and suggests that many of the fears of a fast AI takeoff might be slightly overblown.
