There are concerns that we could run out of human-generated data to train LLMs, but new training data for AI systems could come from an unlikely place.
Bernt Bornich, founder and CEO of humanoid robotics company 1X, has outlined a provocative vision for how artificial intelligence will evolve beyond its current dependence on human-generated training data. In a recent interview, Bornich argued that the future of AI scaling lies not in scraping more internet text or videos, but in deploying fleets of robots that learn by doing real-world tasks. His comments suggest a fundamental shift in how we think about AI development: one where embodied robots become both the workers and the teachers of next-generation AI systems.

“Really if you boil it down to the simple parts, it is if your embodiment, if your robot is close enough to a human, then all of these learnings that we have from video, they actually transfer pretty well,” Bornich explained. “And once you can do that, and your robot can now approach almost any task, as long as you can ask for it, your intelligence doesn’t scale with the amount of data you can collect with humans anymore.”
The key insight, according to Bornich, is what happens once robots achieve sufficient human-like embodiment. “It actually scales with the number of robots you’ve deployed. Because now you just need enough robots actually trying to do all these things, and trying to do these things [that] are useful, right? So you also produce useful work, but in the process, the robot learns and it quickly gets better.”
This creates a virtuous cycle in which robots simultaneously perform valuable labor and generate training data for improving AI models. “And now you just want enough robots across society doing enough different tasks so that you get a very large data coverage,” Bornich said. “And then you’re progressing very well on your scaling laws towards general intelligence. But it’s now independent of actually having to use humans to gather the data through teleoperation.”
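The scaling argument Bornich is making can be illustrated with a toy model (every number here is hypothetical, chosen only to show the shape of the two curves): under teleoperation, data collection is capped by a fixed pool of human operators, while under fleet learning it grows with the number of deployed robots, a number that can itself compound if the robots' useful work funds further deployments.

```python
# Toy comparison of two data-collection regimes. All figures are
# hypothetical and serve only to contrast linear vs. compounding growth.

def teleop_data_hours(days, operators=100, hours_per_day=8):
    """Hours of training data from a fixed human teleoperation team.

    Collection rate is constant, so total data grows linearly with time.
    """
    return operators * hours_per_day * days


def fleet_data_hours(days, initial_robots=100, daily_growth=0.02,
                     hours_per_day=16):
    """Hours of training data from a robot fleet that grows a little
    each day (e.g. because the robots' useful work pays for more units).

    The fleet compounds, so total data grows roughly exponentially.
    """
    robots, total = float(initial_robots), 0.0
    for _ in range(days):
        total += robots * hours_per_day   # every deployed robot is also a data source
        robots *= 1 + daily_growth        # the fleet grows; the operator pool does not
    return total


# After one year, the compounding fleet has collected far more data
# than the fixed teleoperation team, despite starting at the same size.
print(teleop_data_hours(365))
print(fleet_data_hours(365))
```

The point of the sketch is only structural: in the teleoperation regime data scales with the human workforce, while in Bornich's regime it scales with the deployed fleet, which is the quantity that can keep growing.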
The implications of Bornich’s vision are significant for the AI industry. As companies like OpenAI and Google face limitations in available human-generated text data, and as researchers like Meta’s Yann LeCun argue that text data alone is insufficient for human-level intelligence, the industry has been exploring alternatives. Some companies have turned to synthetic data generation as a solution. Bornich’s approach offers a different path: embodied learning through physical robots interacting with the real world.
This strategy could solve multiple problems simultaneously. It addresses the data scarcity issue by creating a self-renewing source of training data. It eliminates the expensive and time-consuming process of human teleoperation for data collection. And crucially, it ensures that AI systems learn from genuine physical interactions rather than purely virtual or text-based environments. If 1X and similar robotics companies can execute on this vision, we may be witnessing the beginning of a new paradigm in AI development: one where the path to artificial general intelligence runs through millions of robots learning by doing, rather than through ever-larger language models trained on static human-created content.