Listen "Don't Die: AI Alignment, Part 1"
Episode Synopsis
In this pivotal episode of Significant, Dr. David Filippi confronts the existential stakes of AI alignment with his signature blend of scientific rigor, philosophical depth, and personal insight. Drawing on everything from the Chicxulub impact to the fragile humanity of a child's laughter, he asks the most urgent question of our time: What if we build a superintelligence that doesn't love our children? This is the hinge of the season, a direct call to those at the cutting edge of AI development to rethink the goals, values, and moral frameworks that will shape our shared future. This is not just another AI episode; it's the reason this podcast exists. Don't miss it.
More episodes of SIGNIFICANT
Superintelligence: The Cat's Out of the Bag
07/03/2025
Stranger Tongues, Stranger Tides
16/02/2025
The Alien Hand and the Shattered World
29/12/2024
Should Everyone Be On A GLP-1?
03/11/2024