Listen "670: LLaMA: GPT-3 performance, 10x smaller"
Episode Synopsis
How does Meta AI's natural language model, LLaMA, compare to the rest? Following the Chinchilla scaling laws, LLaMA is designed to be smaller yet more performant. How exactly does it achieve this feat? By training a smaller model for longer, on far more data. Discover how LLaMA compares to its competition, including GPT-3, in this week's episode.

Additional materials: www.superdatascience.com/670

Interested in sponsoring a SuperDataScience Podcast episode? Visit JonKrohn.com/podcast for sponsorship information.
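For context, here is a back-of-the-envelope sketch of the trade-off the episode describes, using the commonly cited Chinchilla approximations (training compute $C \approx 6ND$ and a compute-optimal data budget of roughly 20 tokens per parameter) together with LLaMA's published training figures; the exact optimum depends on the fitted loss curves, so treat these as rough estimates:

\[
C \approx 6ND, \qquad D_{\mathrm{opt}} \approx 20N
\]

For LLaMA-13B, $N = 1.3 \times 10^{10}$ parameters and $D \approx 10^{12}$ training tokens, so

\[
D_{\mathrm{opt}} \approx 20 \times 1.3 \times 10^{10} = 2.6 \times 10^{11} \ \text{tokens}, \qquad \frac{D}{D_{\mathrm{opt}}} \approx 4.
\]

In other words, LLaMA deliberately trains well past the compute-optimal point, spending extra training compute so that a 13B-parameter model can rival the 175B-parameter GPT-3 while being roughly an order of magnitude smaller, and therefore much cheaper, at inference time.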
More episodes of the podcast Super Data Science: ML & AI Podcast with Jon Krohn
955: Nested Learning, Spatial Intelligence and the AI Trends of 2026, with Sadie St. Lawrence
06/01/2026
953: Beyond “Agent Washing”: AI Systems That Actually Deliver ROI, with Dell’s Global CTO John Roese
30/12/2025
952: How to Avoid Burnout and Get Promoted, with “The Fit Data Scientist” Penelope Lafeuille
26/12/2025
948: In Case You Missed It in November 2025
12/12/2025
946: How Robotaxis Are Transforming Cities
05/12/2025