The Rise of Small AI Models and Why They Matter More Than You Think

04/12/2025 17 min Season 2 Episode 1


Episode Synopsis

Dr. Shelby Heinecke, Senior AI Researcher at Salesforce, joins Ravi Belani to explain why the future of AI will not belong only to giant models with hundreds of billions of parameters. Shelby makes the case for small language models: compact systems with only a few billion parameters that run faster, cost less, protect privacy, and still perform at a very high level when trained well on focused tasks.

In this episode, they dig into:

- Why small models are a different tool, not a weaker version of large models
- How fine-tuned small models can beat much larger models on specific agentic tasks
- Where small models shine most: privacy, speed, cost to serve, and on-device use cases
- How Salesforce built "Tiny Giant," a 1B-parameter model that outperforms much larger models on selected tasks
- What really matters in training: data quality, workflows, and trajectory-style datasets
- How synthetic data, noise, and guardrails help make models more robust in the real world
- Why founders should look closely at on-device AI and domain-specific small models

Shelby also shares practical advice for founders who want to build in the small model space, and closes with a simple takeaway: do not underestimate small models. If you care about AI agents, privacy, edge computing, or future startup opportunities, this conversation will give you a lot to think about.

More episodes of the podcast Alchemist Accelerator: Influencer Series Fireside Chat