Meta Initially Lacked Conviction That Superintelligence Was Coming: Alexandr Wang

Alexandr Wang appears to have stemmed Meta’s AI decline, at least in part, with the Muse Spark model, and he has some interesting insights into why the company had fallen so far behind in the first place.

Wang, who joined Meta as Chief AI Officer after it acquired a 49% stake in his data-labeling company Scale AI for $14.3 billion, spoke on the Core Memory podcast about the state of Meta’s AI efforts when he arrived and what he set out to fix.

“When I got to Meta, it was clear that there needed to be some reset of the efforts, and some rebuild of our AI efforts to get onto the right trajectory,” Wang said. “Because ultimately, Llama 4 was not on the same trajectory, and so we were behind the frontier. And so we needed to build a plan that would enable us to have a very, very fast velocity, to be able to both catch up and hopefully exceed where the frontier is.”

When pressed on the specifics of what was wrong, Wang pointed to something more foundational than strategy or resources.

“I think the more fundamental ones are just around — a lot of the leading labs, they build their entire organizations around the premise that superintelligence is coming, and that it is very close, and that this is a very realistic thing to believe that we can create and produce. And then you build the entire plan of the lab and the business and what you focus on around this fundamental belief.”

“So that was one of the first things: to just take superintelligence seriously, and then start to rebuild all of your other assumptions around that core premise. I think that was somewhat fundamental.”

When the interviewer characterized this as a lack of “religious conviction,” Wang agreed — and suggested the problem extends well beyond Meta.

“I think this is relatively common, actually, for a lot of people at all the large companies, who don’t necessarily have this conviction. Because if you think about it, it’s a bit of a different construction. A lot of the big companies have very smart people who work on AI, but it’s a little bit different from these startups where, you know, these new efforts start from scratch with this crazy idea that superintelligence is coming.”

Wang was quick to add that he no longer sees this as a problem at Meta. “Obviously, I think now, Meta Superintelligence Labs is — it’s in the name — built around this concept that superintelligence is coming.”

He laid out the principles MSL was organized around: “One is take superintelligence seriously. Two is technical voices are loudest. Three is scientific rigor, focus on basics. And four is make big bets.”

Wang’s candid diagnosis cuts to a tension that has long existed at large technology companies: the gap between AI as a product capability and AI as an existential bet. Startups like OpenAI and Anthropic were built from day one around the idea that they were racing toward something transformative — an assumption that shaped everything from hiring to research priorities. At Meta, by contrast, AI sat within a larger organization optimized for social media, advertising, and consumer products. The culture, Wang implies, simply hadn’t caught up to the stakes.

The consequences of that gap were visible. Llama 4 was criticized as lagging behind rival models (Meta’s own chief AI scientist, Yann LeCun, later admitted benchmarks had been “fudged a little bit”), and the reception damaged Meta’s credibility as a serious frontier lab. Meta’s response was to spend aggressively, poaching researchers from OpenAI with reported signing bonuses of up to $100 million; executives there described the raids as feeling like someone had “broken into our home.”

The early results suggest the overhaul is working. Muse Spark, the first model out of Meta Superintelligence Labs, was built from scratch over nine months and is competitive with frontier models on several benchmarks. It now powers Meta AI across the company’s platforms. Whether it reflects a durable cultural shift or just a very expensive course correction is a question only the next generation of models can answer.
