Listen "Ep 12 - Education & advocacy for AI safety w/ Rob Miles (YouTube host)"
Episode Synopsis
We speak with Rob Miles. Rob is the host of the "Robert Miles AI Safety" channel on YouTube, the single most popular AI alignment video series out there, with 145,000 subscribers and ~600,000 views on his top video. He goes much deeper than most educational resources on alignment, covering important technical topics like the orthogonality thesis, inner misalignment, and instrumental convergence.

Through his work, Rob has educated thousands of people on AI safety, including many now working in advocacy, policy, and technical research. His work has been invaluable for teaching and inspiring the next generation of AI safety experts and for deepening public support for the cause.

Prior to his AI safety education work, Rob studied Computer Science at the University of Nottingham.

We talk to Rob about:
* What got him into AI safety
* How he started making educational videos for AI safety
* What he's working on now
* His top advice for people who also want to do education & advocacy work, in any field, but especially for AI safety
* How he thinks AI safety is currently going as a field of work
* What he wishes more people were working on within AI safety

Hosted by Soroush Pour. Follow me for more AGI content:
Twitter: https://twitter.com/soroushjp
LinkedIn: https://www.linkedin.com/in/soroushjp/

== Show links ==

-- About Rob --
* Rob Miles AI Safety channel - https://www.youtube.com/@RobertMilesAI
* Twitter - https://twitter.com/robertskmiles

-- Further resources --
* Channel where Rob first started making videos: https://www.youtube.com/@Computerphile
* Podcast ep w/ Eliezer Yudkowsky, whose writings first convinced Rob to take AI safety seriously: https://lexfridman.com/eliezer-yudkowsky/

Recording date: Nov 21, 2023
More episodes of the podcast Artificial General Intelligence (AGI) Show with Soroush Pour
Ep 14 - Interp, latent robustness, RLHF limitations w/ Stephen Casper (PhD AI researcher, MIT)
19/06/2024
Ep 13 - AI researchers expect AGI sooner w/ Katja Grace (Co-founder & Lead Researcher, AI Impacts)
19/06/2024
Ep 11 - Technical alignment overview w/ Thomas Larsen (Director of Strategy, Center for AI Policy)
14/12/2023
Ep 10 - Accelerated training to become an AI safety researcher w/ Ryan Kidd (Co-Director, MATS)
08/11/2023
Ep 8 - Getting started in AI safety & alignment w/ Jamie Bernardi (AI Safety Lead, BlueDot Impact)
13/10/2023
Ep 7 - Responding to a world with AGI - Richard Dazeley (Prof AI & ML, Deakin University)
03/08/2023