Listen "Is AI Alignment Possible before we have Aligned Ourselves? Learning to Trust the Goodness of Being"
Episode Synopsis
Balazs Kegl, a computer scientist with a PhD in AI, joins today for a discussion about recognizing the danger inherent in the fears that surround the rapid development of AI. Also available on YouTube.
A wide-ranging conversation that includes ideas drawn from the following resources:
Balazs Kegl's Substack: https://balazskegl.substack.com/p/the-case-for-aligning-the-ai-scientist
The AI Dilemma: https://youtu.be/xoVJKj8lcNQ
Alignment experiments (both the text and the video are vital): https://community.openai.com/t/agi-alignment-experiments-foundation-vs-instruct-various-agent-models/21031
John Vervaeke on AI: https://www.youtube.com/watch?v=A-_RdKiDbz4
Quote from Martin Shaw: "Limit is the difference between growth and death."
More episodes of the podcast The Meaning Code
Play as a Metaphor for Understanding the Dynamic and Creative Aspects of Theology and Existence
07/08/2025
an Linguistic Systems: What can we learn from AI about Human Cognition, Memory and Language?
24/07/2025