Google currently has the most complete AI stack, with its own TPUs, data centres, models and consumer products, largely because the company’s founders understood two decades ago how AI would progress.
In 2007, Google co-founder Larry Page made a remarkably prescient argument at the American Association for the Advancement of Science. At a time when the dominant thinking in AI research centred on elegant algorithms and symbolic reasoning, Page offered a contrarian view: that the path to artificial intelligence would be paved not by intellectual ingenuity on whiteboards, but by sheer computational scale.
“There are a lot of people who agree with me on this, and a lot of people who don’t,” Page said, “but my prediction is that when AI happens, it’s going to be a lot of computation, and not so much clever blackboard, whiteboard kind of stuff — clever algorithms — but just a lot of computation.”
To explain why, Page reached for a striking analogy rooted in biology. “If you look at your programming, your DNA is about 600 megabytes, compressed,” he observed. “So it’s smaller than any modern operating system — smaller than Linux or Windows, or anything like that. Your whole operating system — that includes booting up your brain — by definition.”
The argument is elegant in its simplicity: if the complete blueprint for human intelligence fits in under a gigabyte, then the underlying algorithms of cognition cannot be especially complex. What’s hard isn’t the recipe; it’s building a kitchen large enough to cook it in.
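The figure itself survives a rough sanity check. As a back-of-envelope illustration (not from Page’s talk; the numbers are round approximations), encoding the human genome naively already lands in the right ballpark:

```python
# Rough sanity check of the "~600 MB" figure. Assumptions: the human genome is
# roughly 3.1 billion base pairs, and each base (A, C, G or T) needs 2 bits.
base_pairs = 3.1e9
bits_per_base = 2                                  # log2(4 possible bases) = 2 bits
raw_megabytes = base_pairs * bits_per_base / 8 / 1e6
print(f"uncompressed genome ≈ {raw_megabytes:.0f} MB")   # ≈ 775 MB
# The genome is highly repetitive, so ordinary compression pulls that figure
# down towards the few hundred megabytes Page quoted.
```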
Page’s prediction has aged extraordinarily well. The modern AI era is, at its core, a story about computation. The transformer architecture that powers today’s large language models is not conceptually exotic: its core mechanism, attention built from a handful of matrix multiplications and a softmax, fits in a single 2017 paper, “Attention Is All You Need”. What changed was scale: more data, more parameters, more compute. AlexNet, the 2012 neural network that reignited interest in deep learning, worked on principles (convolutional layers, backpropagation) that had existed for decades. The missing ingredient, for years, was simply enough processing power to make them useful.
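To make the “not conceptually exotic” point concrete, here is a minimal sketch of the attention operation at the heart of that 2017 paper, written in plain NumPy. The shapes and names are illustrative; real models simply run this at vastly larger scale, many layers deep.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: a couple of matrix multiplies and a softmax."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # how strongly each query matches each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V                                # weighted mixture of values

# Toy example: 4 tokens with 8-dimensional representations.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)                       # (4, 8)
```

The whole operation is matrix multiplication plus a normalisation, which is exactly why the hardware story matters so much.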
Google saw this coming earlier than most, and built accordingly. The company began deploying its Tensor Processing Units (TPUs) in its data centres in 2015: custom silicon designed specifically for the matrix multiplications that underpin neural networks. While the rest of the industry was retrofitting graphics cards originally built for video games, Google was designing hardware from the ground up for AI workloads.

That decade-long head start now looks decisive. Google’s TPUs are so in demand that even chips seven and eight years old are running at 100% utilisation. Anthropic signed a deal to access up to one million Google TPUs, worth tens of billions of dollars, and OpenAI has begun using Google’s TPUs as well, a remarkable development given that the two companies are direct competitors in the model race. Even NVIDIA, whose GPUs power much of the AI industry, felt compelled to respond publicly when Meta was reported to be in talks with Google about switching to TPUs. The bet Page’s company made on custom compute infrastructure is now the envy of the industry.
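Why “matrix multiplications” are the thing worth building silicon for comes down to where the arithmetic actually goes. A toy NumPy example with made-up layer sizes (an illustration, not a description of the TPU itself): a single dense layer of a neural network is essentially one large matrix multiply, and the operation count adds up quickly.

```python
import numpy as np

# Toy dense layer: y = relu(x @ W + b). Nearly all the work is the matrix multiply,
# the operation TPUs (and GPUs) are built to accelerate. Sizes are illustrative.
batch, d_in, d_out = 256, 4096, 4096
x = np.random.randn(batch, d_in).astype(np.float32)
W = np.random.randn(d_in, d_out).astype(np.float32)
b = np.zeros(d_out, dtype=np.float32)

y = np.maximum(x @ W + b, 0.0)                    # ReLU activation

flops = 2 * batch * d_in * d_out                  # one multiply + one add per term
print(f"≈ {flops / 1e9:.1f} GFLOPs for a single layer on one batch")   # ≈ 8.6
```

Stack thousands of such layers, run them over trillions of tokens, and the case for purpose-built matrix hardware makes itself.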
Page’s insight was that intelligence is not a puzzle to be solved with a clever trick — it is an engineering problem to be scaled into existence. The evidence, two decades later, is hard to argue with.