""Carefully Bootstrapped Alignment" is organizationally hard" by Raemon
Episode Synopsis
https://www.lesswrong.com/posts/thkAtqoQwN6DtaiGT/carefully-bootstrapped-alignment-is-organizationally-hard

In addition to technical challenges, plans to safely develop AI face lots of organizational challenges. If you're running an AI lab, you need a concrete plan for handling that. In this post, I'll explore some of those issues, using one particular AI plan as an example. I first heard this described by Buck at EA Global London, and more recently with OpenAI's alignment plan. (I think Anthropic's plan has a fairly different ontology, although it still ultimately routes through a similar set of difficulties.)

I'd call the cluster of plans similar to this "Carefully Bootstrapped Alignment."