Listen "“Announcing: OpenAI’s Alignment Research Blog” by Naomi Bashkansky"
Episode Synopsis
The OpenAI Alignment Research Blog launched today at 11 am PT, with one introductory post and two technical posts.

Blog: https://alignment.openai.com/
Thread on X: https://x.com/j_asminewang/status/1995569301714325935?t=O5FvxDVP3OqicF-Y4sCtxw&s=19

Speaking purely personally: when I joined the Alignment team at OpenAI in January, I saw there was more safety research than I'd expected, not to mention interesting thinking on the future of alignment. But that research and thinking didn't really have a place to go: it's often too short or informal for the main OpenAI blog, and most OpenAI researchers aren't on LessWrong. I'm hoping this blog will be a more informal, lower-friction home than the main one, and that this new avenue of publishing encourages sharing and transparency.

---
First published:
December 1st, 2025
Source:
https://www.lesswrong.com/posts/tK9waFKEW48exfrXC/announcing-openai-s-alignment-research-blog
---
Narrated by TYPE III AUDIO.
More episodes of the podcast LessWrong (30+ Karma)
“Announcing RoastMyPost” by ozziegooen
17/12/2025
“The Bleeding Mind” by Adele Lopez
17/12/2025
“Still Too Soon” by Gordon Seidoh Worley
17/12/2025
“Mistakes in the Moonshot Alignment Program and What we’ll improve for next time” by Kabir Kumar
17/12/2025
“Dancing in a World of Horseradish” by lsusr
17/12/2025