""Carefully Bootstrapped Alignment" is organizationally hard" by Raemon

21/03/2023 19 min


Episode Synopsis

https://www.lesswrong.com/posts/thkAtqoQwN6DtaiGT/carefully-bootstrapped-alignment-is-organizationally-hard

In addition to technical challenges, plans to safely develop AI face lots of organizational challenges. If you're running an AI lab, you need a concrete plan for handling that. In this post, I'll explore some of those issues, using one particular AI plan as an example. I first heard this described by Buck at EA Global London, and more recently with OpenAI's alignment plan. (I think Anthropic's plan has a fairly different ontology, although it still ultimately routes through a similar set of difficulties.) I'd call the cluster of plans similar to this "Carefully Bootstrapped Alignment."
