Listen """Carefully Bootstrapped Alignment" is organizationally hard" by Raemon"
Episode Synopsis
In addition to technical challenges, plans to safely develop AI face lots of organizational challenges. If you're running an AI lab, you need a concrete plan for handling that.

In this post, I'll explore some of those issues, using one particular AI plan as an example. I first heard this plan described by Buck at EA Global London, and more recently in OpenAI's alignment plan. (I think Anthropic's plan has a fairly different ontology, although it still ultimately routes through a similar set of difficulties.) I'd call the cluster of plans similar to this "Carefully Bootstrapped Alignment."

Original article: https://www.lesswrong.com/posts/thkAtqoQwN6DtaiGT/carefully-bootstrapped-alignment-is-organizationally-hard

Narrated for LessWrong by TYPE III AUDIO.