Listen "Why Trust in AI Depends on Transparency"
Episode Synopsis
As AI moves deeper into organisations, trust is becoming a design challenge rather than a cultural aspiration. In this episode of The Responsible Edge, Charlie Martin speaks with Steve Garnett about a defining moment from Salesforce’s early cloud years. When systems failed, leadership chose to publish every outage publicly. “We published all of it,” Garnett recalls. “Because we felt that was the right thing to do.”

That decision offers a powerful lesson for today’s AI-driven organisations. As algorithms increasingly decide what employees see, how customers are served, and how performance is measured, transparency becomes essential to trust.

Grounded in a Cerkl article on AI and company culture, the conversation explores:

- Why hiding failure undermines trust
- What transparency looks like when systems make decisions
- How trust must be designed into AI
- Why leaders remain accountable for automated outcomes

This episode is a practical reflection on responsibility, leadership, and what it takes to earn trust when machines act on our behalf.

#ResponsibleAI #CompanyCulture #Leadership #FutureOfWork #EthicalBusiness
More episodes of The Responsible Edge Podcast
The Hidden Costs of Urban Sprawl
31/12/2025
Regenerative Strategy Explained
08/12/2025
How to Decarbonise Fashion Supply Chains
01/12/2025
Why SMEs Must Take Sustainability Seriously
23/11/2025
How to fix the language of sustainability
10/11/2025
The Real Footprint of Professional Advice
01/11/2025
Europe’s Tech Sovereignty Test
25/10/2025