"How to Identify ML Drift Before You Have a Problem"
Episode Synopsis
In this episode of Safe and Sound AI, we dive into the challenge of drift in machine learning models. We break down the key differences between concept drift and data drift (including feature and label drift), explaining how each degrades ML model performance over time. Learn practical detection methods using statistical tests, discover how to identify root causes, and explore strategies for maintaining model accuracy.
Read the article by Fiddler AI and explore additional resources on how AI Observability can help build trust into LLMs and ML models.
More episodes of the podcast Safe and Sound AI
The Anatomy of Agentic Observability
28/10/2025
Should you Observe ML Metrics or Inferences?
12/02/2025
Tracking Drift to Monitor LLM Performance
11/12/2024