Debugging AI Products: From Data Leakage to Evals with Hamel Husain

02/10/2025 · 1h 26min · Episode 4

Episode Synopsis

How do you know if your AI product is actually any good? Hamel Husain has been answering that question for over 25 years. As a former machine learning engineer and data scientist at Airbnb and GitHub (where he worked on research that paved the way for GitHub Copilot), Hamel has spent his career helping teams debug, measure, and systematically improve complex systems.

In this episode, Hamel joins Teresa Torres to break down the craft of error analysis and evaluation for AI products. Together, they trace his journey from forecasting guest lifetime value at Airbnb to consulting with startups like Nurture Boss, an AI-native assistant for apartment complexes. Along the way, they dive into:
- Why debugging AI starts with thinking like a scientist
- How data leakage undermines models (and how to spot it)
- Using synthetic data to stress-test failure modes
- When to rely on code-based assertions vs. LLM-as-judge evals
- Why your CI/CD test suite should always include known-broken cases
- How to prioritize failure modes without drowning in them

Whether you’re a product manager, engineer, or designer, this conversation offers practical, grounded strategies for making your AI features more reliable—and for staying sane while you do it.