Model inspection and interpretation at Seldon
Episode Synopsis
Interpreting complicated models is a hot topic. How can we trust and manage AI models that we can’t explain? In this episode, Janis Klaise, a data scientist with Seldon, joins us to talk about model interpretation and Seldon’s new open source project called Alibi. Janis also gives some of his thoughts on production ML/AI and how Seldon addresses related problems.
Changelog++ members support our work, get closer to the metal, and make the ads disappear. Join today!

Sponsors:

DigitalOcean – Check out DigitalOcean’s dedicated vCPU Droplets with dedicated vCPU threads. Get started for free with a $50 credit. Learn more at do.co/changelog.
DataEngPodcast – A podcast about data engineering and modern data infrastructure.
Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com.
Featuring:

Janis Klaise – GitHub, LinkedIn, X
Chris Benson – Website, GitHub, LinkedIn, X
Daniel Whitenack – Website, GitHub, X

Show Notes:
Seldon
Seldon Core
Alibi
Books
“The Foundation Series” by Isaac Asimov
“Interpretable Machine Learning” by Christoph Molnar
Something missing or broken? PRs welcome!