The 'Moral Crumple Zone': Who Takes the Blame When AI Makes a Mistake? | EP 06

24/09/2025 · 44 min · Season 1, Episode 6


Episode Synopsis

When AI makes a mistake, who's accountable — the developer, the user, or the system itself? In this episode, Meagan Gentry, National AI Practice Senior Manager & Distinguished Technologist at Insight, unpacks the concept of agentic AI and how organizations can embed accountability into autonomous workflows. From the "moral crumple zone" to use case feasibility mapping, Meagan shares frameworks for building trust and driving ROI with AI agents.

Jump right to…
00:00: Welcome/intro
03:12: What is agentic AI?
06:45: Why accountability matters now
09:30: Explainability vs. performance tradeoffs
13:10: Ownership and moral crumple zones
17:15: Mapping accountability across the AI lifecycle
20:21: Empowering users with AI awareness
25:32: Human in the loop vs. human in command
27:24: What CEOs must ask before greenlighting AI
29:30: Who belongs at the AI strategy table
30:58: Culture shifts and trust in AI agents

🎯 Related resources:
• https://www.insight.com/en_US/content-and-resources/blog/the-truth-about-ai-agent-risks-and-what-to-do-about-them.html
• https://www.insight.com/en_US/content-and-resources/blog/6-high-impact-agentic-ai-use-cases-executives-should-champion-today.html

Learn more: https://www.insight.com/en_US/what-we-do/expertise/data-and-ai/generative-ai.html

Subscribe for more episodes.
