Listen "When The Machine Gets It Wrong: Hallucinations"
Episode Synopsis
Welcome to Hippo Education's Practicing with AI, conversations about medicine, AI, and the people navigating both. This month, Rob and Vicky tackle a common pitfall of AI: hallucinations. What are hallucinations (and is that even the right term)? Why do these errors happen? And what can individuals do to reduce the hallucination rate? Plus, Rob and Vicky dive into OpenAI's most recent model release, GPT-5, and analyze its performance against older GPT models.

For those who want to dive deeper into OpenAI's HealthBench benchmark: OpenAI's white paper on HealthBench outlines the benchmark's components and reports performance data for older AI models: https://openai.com/index/healthbench/. Drs. Liu and Liu performed a systematic analysis of HealthBench's strengths and limitations in a paper published in the Journal of Medical Systems in July 2025.

Visit speakpipe.com/hippoed to leave a voice message about anything related to AI and medicine: your excitement, your concerns, your own experiences with AI… anything. Your voice might even make it onto a future episode.
More episodes of the podcast Practicing with AI
Bias and Fairness
19/11/2025
Educating with AI
20/08/2025
AI In The Clinic: Hype or Help?
16/07/2025
What Is AI (And Why Should Clinicians Care?)
17/06/2025