Episode #71: The AI Momentum Trap: When Venture Models Replace Business Models

08/01/2026 45 min Episode 71


Episode Synopsis


In this episode of the Stewart Squared Podcast, host Stewart Alsop sits down with his father Stewart Alsop II for another fascinating father-son discussion about the tech industry. They dive into the Osborne effect - a business phenomenon from the early computer days in which premature product announcements can destroy current sales - and explore how this dynamic is playing out in today's AI landscape. Their conversation covers OpenAI's recent strategic missteps, Google's competitive response with Gemini and TPUs, the circular revenue patterns between major tech companies, and why we might be witnessing fundamental shifts in the AI chip market. They also examine the current state of coding AI tools, the difference between LLMs and true AGI, and whether the tech industry's sophistication can prevent historical bubble patterns from repeating.

Timestamps

00:00 The Osborne Effect: A Historical Perspective
05:53 The Competitive Landscape of AI
12:03 Understanding the AI Bubble
21:00 The Value of AI in Coding and Everyday Tasks
28:47 The Limitations of AI: Creativity and Human Intuition
33:42 The Osborne Effect in AI Development
41:14 US vs China: The Global AI Landscape

Key Insights

1. The Osborne Effect remains highly relevant in today's AI landscape. Adam Osborne's company collapsed in the 1980s after announcing its next computer too early, killing sales of the current model. The same strategic mistake is being repeated by AI companies like OpenAI, which announced multiple products prematurely and had to issue a "code red" to refocus on ChatGPT after Google's unified Gemini offering outcompeted its fragmented approach.

2. Google has executed a masterful strategic repositioning in AI. While companies like OpenAI scattered their efforts across multiple applications, Google unified everything into Gemini and developed TPUs (Tensor Processing Units) for inference and reasoning tasks, positioning itself beyond large language models toward true AI capabilities and pushing major companies like Anthropic, Meta, and even OpenAI to sign billion-dollar TPU deals.

3. The AI industry exhibits dangerous circular revenue patterns reminiscent of the dot-com bubble. Companies are signing binding multi-billion-dollar contracts with each other - OpenAI contracts with Oracle for data centers, Oracle buys NVIDIA chips, NVIDIA does deals with OpenAI - creating an interconnected web in which everyone knows it's a bubble, but the financial commitments are far more binding than simple stock investments.

4. Current AI capabilities represent powerful tools rather than AGI, despite the hype. As Yann LeCun argues, Large Language Models that predict the next token based on existing data cannot achieve true artificial general intelligence. However, AI has become genuinely transformative for specific tasks like coding (where Claude dominates) and language translation, making certain professionals incredibly productive while eliminating barriers to prototyping.

5. Anthropic has captured the most valuable market segment by focusing on enterprise programmers. While Microsoft's Copilot failed to gain traction by being bolted onto Office, Anthropic strategically targeted IT departments and developers who have budget authority and real technical needs. This focus on coding and enterprise programming has made the company a serious competitive threat to Microsoft's traditional enterprise dominance.

6. NVIDIA's massive valuation faces existential risk from the shift beyond LLMs. Trading at approximately 25x revenue compared to Google's 10x, NVIDIA's $4.6 trillion valuation depends entirely on GPU demand for training language models. Google's TPU strategy for inference and reasoning represents a fundamental architectural shift that could undermine NVIDIA's dominance, explaining recent stock volatility when major TPU deals were announced.

7. AI will excel at tasks humans don't want to do, while uniquely human capabilities remain irreplaceable. The future likely involves AI handling linguistic processing and routine tasks, physical AI managing robotic applications, and ontologies codifying business logic - but creativity, intuition, and imagination are fundamentally human capacities that cannot be modeled or replicated through data processing, regardless of scale or sophistication.
