AI hallucinations: Turn on, tune in, beep boop
Episode Synopsis
ChatGPT isn’t always right. In fact, it’s often very wrong, giving faulty biographical information about a person or whiffing on the answers to simple questions. But instead of saying it doesn’t know, ChatGPT often makes things up. Chatbots can’t actually lie, but researchers sometimes call these untruthful outputs “hallucinations”—not quite a lie, but a vision of something that isn’t there. So what’s really happening here, and what does it tell us about the way AI systems err?
Presented by Deloitte
Episode art by Vicky Leta
More episodes of the podcast Quartz Obsession
Sleep: The dreamiest new industry
23/07/2024
F1: The global race to the future
02/07/2024
The algorithm: Letters of recommendation
23/04/2024
Video game remakes: Revival of the fittest
16/04/2024
Green steel: Structural change
09/04/2024
VR headsets: We're practically there
02/04/2024