Listen "Can we safely automate alignment research?"
Episode Synopsis
It's really important; we've got a real shot; there are a ton of ways to fail. Text version here: https://joecarlsmith.com/2025/04/30/can-we-safely-automate-alignment-research/. There's also a video and transcript of a talk I gave on this topic here: https://joecarlsmith.com/2025/04/30/video-and-transcript-of-talk-on-automating-alignment-research/
More episodes of the podcast Joe Carlsmith Audio
Controlling the options AIs can pursue
29/09/2025
Giving AIs safe motivations
18/08/2025
The stakes of AI moral status
21/05/2025
AI for AI safety
14/03/2025
Paths and waystations in AI safety
11/03/2025
When should we worry about AI power-seeking?
19/02/2025
What is it to solve the alignment problem?
13/02/2025
How do we solve the alignment problem?
13/02/2025