Joe Edelman: Co-Founder of Meaning Alignment Institute

06/12/2024 | 1h 21min | Episode 23

Listen "Joe Edelman: Co-Founder of Meaning Alignment Institute"

Episode Synopsis

What happens when artificial intelligence starts weighing in on our moral decisions? Matt Prewitt is joined by Meaning Alignment Institute co-founder Joe Edelman to explore this thought-provoking territory, examining how AI is already shaping our daily experiences and values through social media algorithms. They discuss the tools the Institute has developed to help individuals negotiate their values, and the implications of AI in moral reasoning, venturing into compelling questions about human-AI symbiosis, the nature of meaningful experiences, and whether machines can truly understand what matters to us. For anyone intrigued by the future of human consciousness and decision-making in an AI-integrated world, this discussion opens up fascinating possibilities, and potential pitfalls, we may not have considered.

Links & References:

References:
CouchSurfing - Wikipedia | CouchSurfing.org | Website
Tristan Harris: How a handful of tech companies control billions of minds every day | TED Talk
Center for Humane Technology | Website
MEANING ALIGNMENT INSTITUTE | Website
Replika - AI Girlfriend/Boyfriend
Will AI Improve Exponentially At Value Judgments? - by Matt Prewitt | RadicalxChange
Moral Realism (Stanford Encyclopedia of Philosophy)
Summa Theologica - Wikipedia
When Generative AI Refuses To Answer Questions, AI Ethics And AI Law Get Deeply Worried | AI Refusals
Amanda Askell: The 100 Most Influential People in AI 2024 | TIME | Amanda Askell's work at Anthropic
Overcoming Epistemology by Charles Taylor
God, Beauty, and Symmetry in Science - Catholic Stand | Thomas Aquinas on symmetry
Friedrich Hayek - Wikipedia | "Hayekian"
Eliezer Yudkowsky - Wikipedia | "AI policy people, especially in this kind Yudkowskyian scene"
Resource-rational analysis: Understanding human cognition as the optimal use of limited computational resources | Resource rational (cognitive science term)

Papers & posts mentioned:
[2404.10636] What are human values, and how do we align AI to them? | Paper by Oliver Klingefjord, Ryan Lowe, Joe Edelman
Model Integrity - by Joe Edelman and Oliver Klingefjord | Meaning Alignment Institute Substack

Bios:

Joe Edelman is a philosopher, sociologist, and entrepreneur whose work spans from theoretical philosophy to practical applications in technology and governance. He invented the meaning-based metrics used at CouchSurfing, Facebook, and Apple, and co-founded the Center for Humane Technology and the Meaning Alignment Institute. His biggest contribution is a definition of "human values" that's precise enough to create product metrics, aligned ML models, and values-based democratic structures.

Joe's Social Links:
Meaning Alignment Institute | Website
Meaning Alignment Institute (@meaningaligned) / X
Joe Edelman (@edelwax) / X

Matt Prewitt (he/him) is a lawyer, technologist, and writer. He is the President of the RadicalxChange Foundation.

Matt's Social Links:
ᴍᴀᴛᴛ ᴘʀᴇᴡɪᴛᴛ (@m_t_prewitt) / X

Production Credits:
Produced by G. Angela Corpus.
Co-Produced and Audio Engineered by Aaron Benavides.
Executive Produced by G. Angela Corpus and Matt Prewitt.
Intro/Outro music: "Wind in the Willows" by MagnusMoone, licensed under an Attribution-NonCommercial-ShareAlike 3.0 International License (CC BY-NC-SA 3.0).

This is a RadicalxChange Production.
Feedback or ideas for future episodes? Email us at [email protected].

Connect with RadicalxChange Foundation:
Website
X
YouTube
LinkedIn
Discord
BlueSky