Listen "Explainability of AI"
Episode Synopsis
What does it really mean for AI to be explainable? Can we trust AI systems to tell us why they do what they do—and should the average person even care?
In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by regular guests Jeffery Recker and Bryan Ilg to unpack the messy world of AI explainability—and why it matters more than you might think.
From recommender systems to large language models, we explore:
🔍 The difference between explainability and interpretability
🔍 Why even humans struggle to explain their decisions
🔍 What should count as a "good enough" explanation
🔍 The importance of stakeholder context in defining "useful" explanations
🔍 Why AI literacy and trust go hand in hand
🔍 How concepts from cybersecurity, like zero trust, could inform responsible AI oversight
Plus, hear about the latest report from the Center for Security and Emerging Technology calling for stronger explainability standards, and what it means for AI developers, regulators, and everyday users.
Mentioned in this episode:
🔗 Link to BABL AI's Article: https://babl.ai/report-finds-gaps-in-ai-explainability-testing-calls-for-stronger-evaluation-standards/
🔗 Link to "Putting Explainable AI to the Test" paper: https://cset.georgetown.edu/publication/putting-explainable-ai-to-the-test-a-critical-look-at-ai-evaluation-approaches/?utm_source=ai-week-in-review.beehiiv.com&utm_medium=referral&utm_campaign=ai-week-in-review-3-8-25
🔗 Link to BABL AI's "The Algorithm Audit" paper: https://babl.ai/algorithm-auditing-framework/Check out the babl.ai website for more stuff on AI Governance and
Responsible AI!