Episode 015: Existential Risk
Episode Synopsis
In this episode, Lloyd discusses the concept of "existential risk" in the context of the current global pandemic as well as the pursuit of Artificial General Intelligence (AGI).
Episode Guide:
1:23 - Intro to Existential Risk
5:04 - The Risks of AGI
6:32 - Instrumental Convergence & Self-Preservation
8:52 - Machines:Humans::Humans:Ants
9:46 - A Word From The Authors
14:18 - Poorly Specified Goals
20:50 - The Urn of Invention & The Vulnerable World Hypothesis
28:01 - Turnkey Totalitarianism
More Info:
Visit us at aiexperience.org
Brought to you by ICED(AI)
Host - Lloyd Danzig
More episodes of the podcast The AI Experience
Episode 033: AI in Warfare (Part 1)
09/02/2022
Episode 032: The Metaverse
19/01/2022
Episode 031: Is The Brain A Computer?
29/10/2021
Episode 030: Surveillance
20/08/2021
Episode 029: AI Avatars
26/03/2021
Episode 028: Finitely Big Numbers
29/01/2021
Episode 027: AI in Sports
25/12/2020
Episode 026: The Turing Test
27/11/2020
Episode 025: Genetic Algorithms
30/10/2020
Episode 024: GANs
30/10/2020