"Let’s Think About Slowing Down AI"
Episode Synopsis
If you fear that someone will build a machine that will seize control of the world and annihilate humanity, then one kind of response is to try to build further machines that will seize control of the world even earlier without destroying it, forestalling the ruinous machine’s conquest. An alternative or complementary kind of response is to try to avert such machines being built at all, at least while the degree of their apocalyptic tendencies is ambiguous. The latter approach seems to me like the kind of basic and obvious thing worthy of at least consideration, and also, in its favor, fits nicely in the genre ‘stuff that it isn’t that hard to imagine happening in the real world’. Yet my impression is that for people worried about extinction risk from artificial intelligence, strategies under the heading ‘actively slow down AI progress’ have historically been dismissed and ignored (though ‘don’t actively speed up AI progress’ is popular).

Source: https://www.lesswrong.com/posts/uFNgRumrDTpBfQGrs/let-s-think-about-slowing-down-ai

Narrated for AGI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.

---

A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
More episodes of the podcast AI Safety Fundamentals
AI and Leviathan: Part I
29/09/2025
d/acc: One Year Later
19/09/2025
A Playbook for Securing AI Model Weights
18/09/2025
Resilience and Adaptation to Advanced AI
18/09/2025
Introduction to AI Control
18/09/2025
The Project: Situational Awareness
18/09/2025
The Intelligence Curse
18/09/2025