The AI space is moving so fast right now that even the companies building the best models can’t predict which new products or services will suddenly become viable because of the model improvements they ship.
Logan Kilpatrick, Group Product Manager at Google DeepMind, recently shared candid insights into the unpredictable nature of AI product development, revealing that even Google was caught off guard by the demand for Gemini 3 and surprised by which use cases suddenly “just started working” after model improvements. His comments offer a rare glimpse into the chaos and excitement at the frontier of AI development, where capability improvements can suddenly make entire product categories viable overnight.

Speaking about Gemini 3’s launch, Kilpatrick explained the gap between technical confidence and market uncertainty. “I think we knew the model was going to be strong. We didn’t know what adoption and usage was going to look like. And it’s obviously been through the roof,” he said. The demand has been so intense that infrastructure became the bottleneck: “There’s not enough TPUs in the world to serve all the demand that there is for Gemini 3.”
He also highlighted the parallel success of Nano Banana Pro, which is based on the Gemini 3 Pro model, noting that “it’s been fun to see the adoption curve and to also just see there’s so much nuance in everyone’s use cases.”
But perhaps the most revealing part of Kilpatrick’s comments centered on the unpredictability of which products would benefit from model improvements. “It’s hard to know on a product by product basis or on a startup by startup basis whose company or whose product experience is just gonna start working because the model got good enough,” he explained. “But I’ve seen a bunch of examples of this on X over the last few days that it just starts to work now.”
He pointed to “vibe coding” as a prime example—a coding experience that is “now really just working” after Gemini 3’s improvements. “I’ve been doing all these demos internally and externally and it’s been super cool to see how quickly that experience has shifted,” Kilpatrick added.
Kilpatrick’s observations highlight a fundamental characteristic of the current AI moment: we’re in an era of emergence rather than predictability. The nonlinear nature of AI capabilities means that incremental improvements in model performance can trigger sudden phase transitions in product viability. A coding assistant that was frustratingly unreliable at 70% accuracy might become genuinely useful at 85%, creating a step function in user experience that’s difficult to forecast.

This phenomenon extends beyond Google. OpenAI’s GPT-4 similarly enabled categories of applications that struggled with GPT-3.5, from complex reasoning tasks to multi-step workflows, and Anthropic has seen the same pattern with Claude’s evolution, where each capability jump unlocks use cases that seemed impractical months earlier.

For founders and product builders, this creates both opportunity and uncertainty: the next model update could suddenly make your product viable, or it could commoditize your carefully built solution. As AI capabilities continue their rapid ascent, the gap between what’s technically possible and what the market has discovered remains surprisingly large, and that gap will define winners and losers in the next wave of AI products.