Is AI Alignment Possible before we have Aligned Ourselves? Learning to Trust the Goodness of Being

09/05/2023 1h 20min

Episode Synopsis

Balazs Kegl, a computer scientist with a PhD in AI, joins today for a discussion about recognizing the danger inherent in the fears that surround the rapid development of AI. Also available on YouTube.

A wide-ranging conversation that draws on ideas from the following resources:

- Balazs Kegl's Substack: https://balazskegl.substack.com/p/the-case-for-aligning-the-ai-scientist
- The AI Dilemma: https://youtu.be/xoVJKj8lcNQ
- Alignment experiments (both the text and the video are vital): https://community.openai.com/t/agi-alignment-experiments-foundation-vs-instruct-various-agent-models/21031
- John Vervaeke on AI: https://www.youtube.com/watch?v=A-_RdKiDbz4

Quote from Martin Shaw: "Limit is the difference between growth and death."
