Alignment Newsletter #130: A new AI x-risk podcast, and reviews of the field
Alignment Newsletter Podcast · 24/12/2020 · 12 min · Season 1, Episode 130

Episode Synopsis
Recorded by Robert Miles. More information about the newsletter here.

More episodes of the Alignment Newsletter Podcast:
Alignment Newsletter #173: Recent language model results from DeepMind (21/07/2022)
Alignment Newsletter #172: Sorry for the long hiatus! (05/07/2022)
Alignment Newsletter #171: Disagreements between alignment "optimists" and "pessimists" (23/01/2022)
Alignment Newsletter #170: Analyzing the argument for risk from power-seeking AI (08/12/2021)
Alignment Newsletter #169: Collaborating with humans without human data (24/11/2021)
Alignment Newsletter #168: Four technical topics for which Open Phil is soliciting grant proposals (28/10/2021)
Alignment Newsletter #167: Concrete ML safety problems and their relevance to x-risk (20/10/2021)
Alignment Newsletter #166: Is it crazy to claim we're in the most important century? (08/10/2021)
Alignment Newsletter #165: When large models are more likely to lie (22/09/2021)
Alignment Newsletter #164: How well can language models write code? (15/09/2021)