Listen: "Reward Function Design: a starter pack" by Steven Byrnes
Episode Synopsis
In the companion post We need a field of Reward Function Design, I implore researchers to think about what RL reward functions (if any) will lead to RL agents that are not ruthless power-seeking consequentialists. I further suggest that human social instincts constitute an intriguing example we should study, since they seem to be an existence proof that such reward functions exist. So what is the general principle of Reward Function Design that underlies the non-ruthless ("ruthful"??) properties of human social instincts? And whatever that general principle is, can we apply it to future RL agent AGIs? I don't have all the answers, but I think I've made some progress, and the goal of this post is to make it easier for others to get up to speed with my current thinking. What I do have, thanks mostly to work from the past 12 months, is five frames / terms / mental images for thinking about this aspect of reward function design. These frames are not widely used in the RL reward function literature, but I now find them indispensable thinking tools. These five frames are complementary but related: I think they're kinda poking at different parts of the same [...]
---
Outline:
- (02:22) Frame 1: behaviorist vs non-behaviorist (interpretability-based) reward functions
- (02:40) Frame 2: Inner / outer misalignment, specification gaming, goal misgeneralization
- (03:19) Frame 3: Consequentialist vs non-consequentialist desires
  - (03:35) Pride as a special case of non-consequentialist desires
- (03:52) Frame 4: Generalization upstream of the reward signals
- (04:10) Frame 5: Under-sculpting desires
- (04:24) Some comments on how these relate
---
First published:
December 8th, 2025
Source:
https://www.lesswrong.com/posts/xw8P8H4TRaTQHJnoP/reward-function-design-a-starter-pack
---
Narrated by TYPE III AUDIO.