"Inference Time Scaling for Enterprises"
Episode Synopsis
In Episode 3 of No Math AI, Red Hat CEO Matt Hicks and CTO Chris Wright join hosts Akash Srivastava and Isha Puri to explore what it really takes to scale large language model inference in production. From cost concerns and platform orchestration to the launch of llm-d, they break down the transition from static models to dynamic, reasoning-heavy applications, and how open source collaboration is making scalable AI a reality for enterprise teams.