#18 Nathan Labenz on reinforcement learning, reasoning models, emergent misalignment & more

02/03/2025 1h 46min

Episode Synopsis

A lot has happened in AI since the last time I spoke to Nathan Labenz of The Cognitive Revolution, so I invited him back on for a whistlestop tour of the most important developments we've seen over the last year! We covered reasoning models, DeepSeek, the many spooky alignment failures we've observed in the last few months & much more!

Follow Nathan on Twitter
Listen to The Cognitive Revolution
My Twitter & Substack
