Ep 243: Why AI Models Hallucinate
Episode Synopsis
Why do AI models make things up? In this episode, I explain why Large Language Models “hallucinate” and confidently give wrong answers. Using OpenAI’s latest research, I break down what causes these errors, why rare facts are tricky, and how we can make AI more reliable. If you want to understand AI’s mistakes and how to use it safely, this episode is for you.

Need More Support? If you’re ready to explore how AI can make your marketing smarter and more efficient, check out my Professional Diploma in AI for Marketers. Or, if you’re looking for in-company training, I can help get your team up to speed. Use the code AISIX10 for a special discount just for podcast listeners. https://publicsectormarketingpros.com
More episodes of the podcast AI SIX Podcast
Ep 254: I Vibe Coded An App
10/12/2025
Ep 253: OpenAI Code Red
09/12/2025
Ep 252: My AI Impact Framework
08/12/2025
Ep 251: In the Hot Seat with Steve Morreale
06/12/2025
Ep 250: The Problem with AI Detectors
05/12/2025
Ep 249: Automate Tasks With ChatGPT
04/12/2025
Ep 248: EU AI Rules Delayed
03/12/2025
Ep 247: My GP is Using AI
02/12/2025
Ep 246: Adobe buys SEMrush
01/12/2025