Low-Stakes Alignment
Episode Synopsis
Right now I’m working on finding a good objective to optimize with ML, rather than trying to make sure our models are robustly optimizing that objective. (This is roughly “outer alignment.”) That’s pretty vague, and it’s not obvious whether “find a good objective” is a meaningful goal rather than being inherently confused or sweeping key distinctions under the rug. So I like to focus on a more precise special case of alignment: solve alignment when decisions are “low stakes.” I think this case effectively isolates the problem of “find a good objective” from the problem of ensuring robustness and is precise enough to focus on productively. In this post I’ll describe what I mean by the low-stakes setting, why I think it isolates this subproblem, why I want to isolate this subproblem, and why I think that it’s valuable to work on crisp subproblems.
Source: https://www.alignmentforum.org/posts/TPan9sQFuPP6jgEJo/low-stakes-alignment
Narrated for AI Safety Fundamentals by TYPE III AUDIO.
---
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
More episodes of the podcast AI Safety Fundamentals
AI and Leviathan: Part I
29/09/2025
d/acc: One Year Later
19/09/2025
A Playbook for Securing AI Model Weights
18/09/2025
Resilience and Adaptation to Advanced AI
18/09/2025
Introduction to AI Control
18/09/2025
The Project: Situational Awareness
18/09/2025
The Intelligence Curse
18/09/2025