Listen "Fine-tuning vs RAG"
Episode Synopsis
In this episode we welcome back our good friend Demetrios from the MLOps Community to discuss fine-tuning vs. retrieval augmented generation. Along the way, we also chat about OpenAI Enterprise, results from the MLOps Community LLM survey, and the orchestration and evaluation of generative AI workloads.

Changelog++ members save 1 minute on this episode because they made the ads disappear. Join today!

Sponsors:
Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com
Fly.io – The home of Changelog.com. Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.
Typesense – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can't get any faster!

Featuring:
Demetrios Brinkmann – X
Chris Benson – Website, GitHub, LinkedIn, X
Daniel Whitenack – Website, GitHub, X

Show Notes:
MLOps Community
LLM survey report
LLMs in Production Event - Part III

Something missing or broken? PRs welcome!
More episodes of the podcast Practical AI
The AI engineer skills gap – 10/12/2025
Technical advances in document understanding – 02/12/2025
Beyond note-taking with Fireflies – 19/11/2025
Autonomous Vehicle Research at Waymo – 13/11/2025
Are we in an AI bubble? – 10/11/2025
While loops with tool calls – 30/10/2025
Tiny Recursive Networks – 24/10/2025
Dealing with increasingly complicated agents – 16/10/2025