Listen "Max Schwarzer"
Episode Synopsis
Max Schwarzer is a PhD student at Mila, advised by Aaron Courville and Marc Bellemare, interested in RL scaling, representation learning for RL, and RL for science. Max spent the last 1.5 years at Google Brain/DeepMind and is now at Apple Machine Learning Research.

Featured References
Bigger, Better, Faster: Human-level Atari with human-level efficiency — Max Schwarzer, Johan Obando-Ceron, Aaron Courville, Marc Bellemare, Rishabh Agarwal, Pablo Samuel Castro
Sample-Efficient Reinforcement Learning by Breaking the Replay Ratio Barrier — Pierluca D'Oro, Max Schwarzer, Evgenii Nikishin, Pierre-Luc Bacon, Marc G Bellemare, Aaron Courville
The Primacy Bias in Deep Reinforcement Learning — Evgenii Nikishin, Max Schwarzer, Pierluca D'Oro, Pierre-Luc Bacon, Aaron Courville

Additional References
Rainbow: Combining Improvements in Deep Reinforcement Learning, Hessel et al 2017
When to use parametric models in reinforcement learning?, van Hasselt et al 2019
Data-Efficient Reinforcement Learning with Self-Predictive Representations, Schwarzer et al 2020
Pretraining Representations for Data-Efficient Reinforcement Learning, Schwarzer et al 2021
More episodes of the podcast TalkRL: The Reinforcement Learning Podcast
Danijar Hafner on Dreamer v4
09/11/2025
Jake Beck, Alex Goldie, & Cornelius Braun on Sutton's OaK, Metalearning, LLMs, Squirrels @ RLC 2025
19/08/2025
Thomas Akam on Model-based RL in the Brain
03/08/2025
NeurIPS 2024 - Posters and Hallways 3
09/03/2025
NeurIPS 2024 - Posters and Hallways 2
04/03/2025