"Machine Morality: Unwanted Bias in AI, Explained"
Episode Synopsis
Today’s government agencies are tasked with providing quality experiences and services to their constituents. More and more, that requires the implementation of AI and automated tools, from chatbots and virtual assistants to enhanced mapping and monitoring capabilities.
These innovations empower government agencies to do more with less, and more importantly, provide citizens and staff with services where and when they need them.
But there’s a bit of a caveat here. While AI has all this potential, it also comes with a number of risks and challenges. Incomplete data sets and human error during the data training process can lead to biased algorithms.
If we’re not careful, AI can end up doing more harm than good.
So, how can government agencies prevent these biases while continuing to innovate?
Introducing Machine Morality, a new podcast from Esri and GovExec’s Studio 2G, where we’ll get to the bottom of some of government’s biggest ethical AI challenges. In this three-part series, we’ll listen in as experts on AI and ethics from government and industry alike discuss how defense and intelligence leaders can strategically implement the latest AI tools and technologies while ensuring the technology serves all populations fairly and equally.
This episode draws from a recent Defense One and INSA webcast, underwritten by Esri, titled “AI and Ethics: Mitigating Unwanted Bias,” in which experts discuss some of today’s most pressing hurdles for AI in government and how we can begin to address them together.
Check it out.