AI in the shadows: From hallucinations to blackmail
Episode Synopsis
In the first episode of an "AI in the shadows" theme, Chris and Daniel explore the increasingly concerning world of agentic misalignment. Starting with a reminder about hallucinations and reasoning models, they break down how today's models only mimic reasoning, which can lead to serious ethical considerations. They unpack a fascinating (and slightly terrifying) new study from Anthropic, in which agentic AI models were caught simulating blackmail, deception, and even sabotage, all in the name of goal completion and self-preservation.

Featuring:
Chris Benson – Website, LinkedIn, Bluesky, GitHub, X
Daniel Whitenack – Website, GitHub, X

Links:
Agentic Misalignment: How LLMs could be insider threats
Hugging Face Agents Course
Register for upcoming webinars here!
More episodes of the podcast Practical AI
The AI engineer skills gap
10/12/2025
Technical advances in document understanding
02/12/2025
Beyond note-taking with Fireflies
19/11/2025
Autonomous Vehicle Research at Waymo
13/11/2025
Are we in an AI bubble?
10/11/2025
While loops with tool calls
30/10/2025
Tiny Recursive Networks
24/10/2025
Dealing with increasingly complicated agents
16/10/2025