Listen "Rohin Shah"
Episode Synopsis
Dr. Rohin Shah is a Research Scientist at DeepMind, and the editor and main contributor of the Alignment Newsletter.

Featured References

The MineRL BASALT Competition on Learning from Human Feedback
Rohin Shah, Cody Wild, Steven H. Wang, Neel Alex, Brandon Houghton, William Guss, Sharada Mohanty, Anssi Kanervisto, Stephanie Milani, Nicholay Topin, Pieter Abbeel, Stuart Russell, Anca Dragan

Preferences Implicit in the State of the World
Rohin Shah, Dmitrii Krasheninnikov, Jordan Alexander, Pieter Abbeel, Anca Dragan

Benefits of Assistance over Reward Learning
Rohin Shah, Pedro Freire, Neel Alex, Rachel Freedman, Dmitrii Krasheninnikov, Lawrence Chan, Michael D Dennis, Pieter Abbeel, Anca Dragan, Stuart Russell

On the Utility of Learning about Humans for Human-AI Coordination
Micah Carroll, Rohin Shah, Mark K. Ho, Thomas L. Griffiths, Sanjit A. Seshia, Pieter Abbeel, Anca Dragan

Evaluating the Robustness of Collaborative Agents
Paul Knott, Micah Carroll, Sam Devlin, Kamil Ciosek, Katja Hofmann, A. D. Dragan, Rohin Shah

Additional References

AGI Safety Fundamentals, EA Cambridge
More episodes of the podcast TalkRL: The Reinforcement Learning Podcast
Danijar Hafner on Dreamer v4
09/11/2025
Jake Beck, Alex Goldie, & Cornelius Braun on Sutton's OaK, Metalearning, LLMs, Squirrels @ RLC 2025
19/08/2025
Thomas Akam on Model-based RL in the Brain
03/08/2025
NeurIPS 2024 - Posters and Hallways 3
09/03/2025
NeurIPS 2024 - Posters and Hallways 2
04/03/2025