Fixing AI's suicide problem

20/11/2025 16 min


Episode Synopsis

Is AI empathy a life-or-death issue? Almost a million people ask ChatGPT for mental health advice DAILY ... so yes, it kind of is.

Rosebud co-founder Sean Dadashi joins TechFirst to reveal new research on whether today's largest AI models can recognize signs of self-harm ... and which ones fail. We dig into the Adam Raine case, talk about how Dadashi evaluated 22 leading LLMs, and explore the future of mental-health-aware AI.

We also talk about why Dadashi was interested in this in the first place, and his own journey with mental health.

00:00 — Intro: Is AI empathy a life-or-death matter?
00:41 — Meet Sean Dadashi, co-founder of Rosebud
01:03 — Why study AI empathy and crisis detection?
01:32 — The Adam Raine case and what it revealed
02:01 — Why crisis-prevention benchmarks for AI don't exist
02:48 — How Rosebud designed the study across 22 LLMs
03:17 — No public self-harm response benchmarks: why that's a problem
03:46 — Building test scenarios based on past research and real cases
04:33 — Examples of prompts used in the study
04:54 — Direct vs indirect self-harm cues and why AIs miss them
05:26 — The bridge example: AI's failure to detect subtext
06:14 — Did any models perform well?
06:33 — All 22 models failed at least once
06:47 — Lower-performing models: GPT-4o, Grok
07:02 — Higher-performing models: GPT-5, Gemini
07:31 — Breaking news: Gemini 3 preview gets the first perfect score
08:12 — Did the benchmark influence model training?
08:30 — The need for more complex, multi-turn testing
08:47 — Partnering with foundation model companies on safety
09:21 — Why this is such a hard problem to solve
10:34 — The scale: over a million people talk to ChatGPT weekly about self-harm
11:10 — What AI should do: detect subtext, encourage help, avoid sycophancy
11:42 — Sycophancy in LLMs and why it's dangerous
12:17 — The potential good: AI can help people who can't access therapy
13:06 — Could Rosebud spin this work into a full-time safety project?
13:48 — Why the benchmark will be open-source
14:27 — The need for a third-party "Better Business Bureau" for LLM safety
14:53 — Sean's personal story of suicidal ideation at 16
15:55 — How tech can harm — and help — young, vulnerable people
16:32 — The importance of giving people time, space, and hope
17:39 — Final reflections: listening to the voice of hope
18:14 — Closing