"How Can You Address AI Bias In HR? With Jeff Pole, Warden AI"
Episode Synopsis
Jeff Pole runs Warden AI, an AI assurance provider that evaluates and communicates the trustworthiness of AI systems, with a particular focus on HR technology. Its platform performs continuous auditing and testing for AI risks such as bias, and helps businesses stay compliant – particularly with new AI regulations – and build trust through transparency.

In this conversation, Jeff talks to Sultan about:

The changing nature of AI and automation in HR, and the associated risks

The process of continuous auditing, and testing for bias in AI models

Transparency as a core theme in relevant regulations and guidelines

Building trust in models and through user experiences

The importance of explainability in AI-powered systems

Using AI to actually reduce bias in hiring and other HR processes

“There’s also a win-win opportunity here: where the people-based processes the AI might be complementing are not necessarily what we should be emulating … We know there’s a huge amount of unconscious bias in human processes; for example, in hiring. And the AI can, if carefully designed and properly used, actually be an improvement on that status quo.”

– Jeff Pole, Co-founder & CEO, Warden AI