Book Review: Reframing Superintelligence
Episode Synopsis
Ten years ago, everyone was talking about superintelligence, the singularity, the robot apocalypse. What happened? I think the main answer is: the field matured. Why isn't everyone talking about nuclear security, biodefense, or counterterrorism? Because there are already competent institutions working on those problems, and people who are worried about them don't feel the need to take their case directly to the public. The past ten years have seen AI goal alignment reach that level of maturity too. There are all sorts of new research labs, think tanks, and companies working on it – the Center For Human-Compatible AI at UC Berkeley, OpenAI, Ought, the Center For The Governance Of AI at Oxford, the Leverhulme Center For The Future Of Intelligence at Cambridge, etc. Like every field, it could still use more funding and talent. But it's at a point where academic respectability trades off against public awareness at a rate where webzine articles saying CARE ABOUT THIS OR YOU WILL DEFINITELY DIE are less helpful.

One unhappy consequence of this happy state of affairs is that it's harder to keep up with the field. In 2014, Nick Bostrom wrote Superintelligence: Paths, Dangers, Strategies, giving a readable overview of what everyone was thinking up to that point. Since then, things have been less public-facing, less readable, and more likely to be published in dense papers with a lot of mathematical notation. They've also been – no offense to everyone working on this – less revolutionary and less interesting.

This is one reason I was glad to come across Reframing Superintelligence: Comprehensive AI Services As General Intelligence by Eric Drexler, a researcher who works alongside Bostrom at Oxford's Future of Humanity Institute. This 200-page report is not quite as readable as Superintelligence; its highly-structured outline form belies the fact that all of its claims start sounding the same after a while. But it's five years more recent, and it presents a very different vision of how future AI might look.
More episodes of Astral Codex Ten Podcast