Listen "AISN #8: Why AI could go rogue, how to screen for AI risks, and grants for research on democratic governance of AI."
Episode Synopsis
Yoshua Bengio makes the case for rogue AI

AI systems pose a variety of risks. Renowned AI scientist Yoshua Bengio recently argued for one particularly concerning possibility: that advanced AI agents could pursue goals in conflict with human values. Human intelligence has accomplished impressive feats, from flying to the moon to building nuclear weapons. But Bengio argues that across a range of important intellectual, economic, and social activities, human intelligence could be matched and even surpassed by AI. How would advanced AIs change our world? Many technologies are tools, such as toasters and calculators, which humans use to accomplish our goals. AIs are different, Bengio says. [...]

---

Outline:
(00:11) Yoshua Bengio makes the case for rogue AI
(05:11) How to screen AIs for extreme risks
(09:12) Funding for Work on Democratic Inputs to AI
(10:43) Links

---
First published:
May 30th, 2023
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-8
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.