Listen "Ep 9 - Scaling AI safety research w/ Adam Gleave (CEO, FAR AI)"
Episode Synopsis
We speak with Adam Gleave, CEO of FAR AI (https://far.ai). FAR AI's mission is to ensure AI systems are trustworthy & beneficial. They incubate & accelerate research that's too resource-intensive for academia but not yet ready for commercialisation. Their work spans adversarial robustness, interpretability, preference learning, & more.

We talk to Adam about:
* The founding story of FAR as an AI safety org, and how it differs from the big commercial labs (e.g. OpenAI) and academia.
* Their current research directions & how they're going
* Promising agendas & notable gaps in AI safety research

Hosted by Soroush Pour. Follow me for more AGI content:
Twitter: https://twitter.com/soroushjp
LinkedIn: https://www.linkedin.com/in/soroushjp/

== Show links ==

-- About Adam --
Adam Gleave is the CEO of FAR, one of the most prominent not-for-profits focused on research towards AI safety & alignment. He completed his PhD in artificial intelligence (AI) at UC Berkeley, advised by Stuart Russell, a giant in the field of AI. Adam did his PhD on trustworthy machine learning and has dedicated his career to ensuring advanced AI systems act according to human preferences. Adam is incredibly knowledgeable about the world of AI, having worked directly as a researcher and now as leader of a sizable and growing research org.

-- Further resources --
* Adam
  * Website: https://www.gleave.me/
  * Twitter: https://twitter.com/ARGleave
  * LinkedIn: https://www.linkedin.com/in/adamgleave/
  * Google Scholar: https://scholar.google.com/citations?user=lBunDH0AAAAJ&hl=en&oi=ao
* FAR AI
  * Website: https://far.ai
  * Twitter: https://twitter.com/farairesearch
  * LinkedIn: https://www.linkedin.com/company/far-ai/
  * Job board: https://far.ai/category/jobs/
* AI safety training bootcamps
  * ARENA: https://www.arena.education/
  * See also: MLAB, WMLB, https://aisafety.training/
* Research
  * FAR's adversarial attack on KataGo: https://goattack.far.ai/
* Ideas for impact mentioned by Adam
  * Consumer report for AI model safety
  * Agency model to support AI safety researchers
  * Compute cluster for AI safety researchers
* Donate to AI safety
  * FAR AI: https://www.every.org/far-ai-inc#/donate/card
  * ARC Evals: https://evals.alignment.org/
  * Berkeley CHAI: https://humancompatible.ai/

Recorded Oct 9, 2023
More episodes of the podcast Artificial General Intelligence (AGI) Show with Soroush Pour
Ep 14 - Interp, latent robustness, RLHF limitations w/ Stephen Casper (PhD AI researcher, MIT)
19/06/2024
Ep 13 - AI researchers expect AGI sooner w/ Katja Grace (Co-founder & Lead Researcher, AI Impacts)
19/06/2024
Ep 11 - Technical alignment overview w/ Thomas Larsen (Director of Strategy, Center for AI Policy)
14/12/2023
Ep 10 - Accelerated training to become an AI safety researcher w/ Ryan Kidd (Co-Director, MATS)
08/11/2023
Ep 8 - Getting started in AI safety & alignment w/ Jamie Bernardi (AI Safety Lead, BlueDot Impact)
13/10/2023
Ep 7 - Responding to a world with AGI - Richard Dazeley (Prof AI & ML, Deakin University)
03/08/2023