Listen "Arash Ahmadian on Rethinking RLHF"
Episode Synopsis
Arash Ahmadian is a researcher at Cohere and Cohere For AI focused on preference training of large language models. He is also a researcher at the Vector Institute for AI.

Featured Reference
Back to Basics: Revisiting REINFORCE Style Optimization for Learning from Human Feedback in LLMs, Arash Ahmadian, Chris Cremer, Matthias Gallé, Marzieh Fadaee, Julia Kreutzer, Olivier Pietquin, Ahmet Üstün, Sara Hooker

Additional References
Self-Rewarding Language Models, Yuan et al. 2024
Reinforcement Learning: An Introduction, Sutton and Barto 2018
Learning from Delayed Rewards, Chris Watkins 1989
Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning, Williams 1992
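The featured paper revisits the classic REINFORCE policy gradient (Williams 1992) as a simpler alternative to PPO for learning from human feedback. As a rough illustration only, below is a minimal Python (PyTorch) sketch of the vanilla REINFORCE estimator with a variance-reducing baseline; the function name, tensor shapes, and toy numbers are assumptions made for this sketch, not code from the paper.

import torch

def reinforce_loss(logprobs, rewards, baseline=0.0):
    # logprobs: (batch,) summed log-probabilities of each sampled completion
    # rewards:  (batch,) scalar reward per completion (e.g. from a reward model)
    # baseline: subtracted from rewards to reduce variance without adding bias
    advantage = rewards - baseline
    # Minimizing -E[(R - b) * log pi(y|x)] yields the REINFORCE policy gradient
    return -(advantage.detach() * logprobs).mean()

# Toy usage with made-up numbers
logprobs = torch.tensor([-12.3, -9.8, -15.1], requires_grad=True)
rewards = torch.tensor([0.7, 0.2, 0.9])
loss = reinforce_loss(logprobs, rewards, baseline=rewards.mean())
loss.backward()  # gradients flow only through logprobs

Using the batch-mean reward as the baseline, as in the toy usage above, is one simple choice; the paper's RLOO variant instead uses a leave-one-out baseline over multiple samples per prompt.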
More episodes of the podcast TalkRL: The Reinforcement Learning Podcast
Danijar Hafner on Dreamer v4
09/11/2025
Jake Beck, Alex Goldie, & Cornelius Braun on Sutton's OaK, Metalearning, LLMs, Squirrels @ RLC 2025
19/08/2025
Thomas Akam on Model-based RL in the Brain
03/08/2025
NeurIPS 2024 - Posters and Hallways 3
09/03/2025
NeurIPS 2024 - Posters and Hallways 2
04/03/2025