We'll Still Need Humans To Manage AI Agents: Scale AI CEO Alexandr Wang

AI agents are rapidly becoming more capable, but they will likely need human oversight for the foreseeable future.

Alexandr Wang, the former CEO of Scale AI who now leads Meta's superintelligence efforts, suggests that the future of work with AI will be less about the complete replacement of humans and more about a complex, collaborative relationship in which humans remain firmly in the loop, especially when things go wrong.

Wang emphasizes that the role of a manager for AI agents will be far from a “cushy job.” Instead of idly supervising, humans will be deeply involved in the messy reality of problem-solving. “What are the unique things that humans will do over time? I mean, I think this element of vision is very important,” Wang states. “This element of debugging or fixing when things go wrong. Most of a manager’s job, speaking as a manager, is just putting out fires, dealing with problems, dealing with issues that come up.”

This view contrasts sharply with the idealistic notion of a “Victorian life where all your problems are solved” by a legion of perfectly functioning AI agents. Wang cautions against this oversimplified “extreme reality.” He argues, “I think people often jump to this extreme reality where it’s like, ‘You’re just gonna manage the agents and you’re gonna sort of live this kind of Victorian life where all your problems are solved.’ But no, I think it’s still gonna be pretty complicated.”

The complexity, according to Wang, will lie in the coordination and troubleshooting of these AI systems. “Getting agents to coordinate well with one another and coordinating the workflows and then debugging the issues that come up – these are still complicated issues,” he explains.

To illustrate his point, Wang draws a compelling parallel to the world of self-driving cars, an industry that has grappled with the ‘last mile’ problem for years. “Having seen what happened in self-driving, which was more or less that, you know, it’s easy to get to 90%, very, very hard to get to 99%. I think that something similar will happen with large-scale agent deployments and that final 10% of accuracy will require a lot of work.”

He further highlights the high level of human involvement still required in what many perceive as an automated domain, sharing a surprising statistic: “The companies don’t publish them, but I think the ratio is something like five (self-driven) cars to one teleoperator. Or maybe even less than maybe three cars per teleoperator. So the ratio is like, you know, much lower than people think.”

This isn’t the first time Wang has suggested that management won’t entirely disappear in the age of AI; he has previously said that AI will effectively promote everyone to being a manager of AI agents. Wang also has an interesting vantage point in the AI space: Scale AI has worked with companies including Google, OpenAI and Anthropic on creating synthetic data, so he likely knows the abilities — and limitations — of their AI products. And as the experience with self-driving shows, while AI agents are showing a lot of promise at the moment, it could be a while before they become truly autonomous.
