Listen "Interpretable AI, with Dr. Faiza Khan Khattak"
Episode Synopsis
How can we build AI systems that are fair, explainable, and truly responsible? In this episode of the #WiAIR podcast, we sit down with Dr. Faiza Khan Khattak, CTO of an innovative AI startup, who brings a rich background in both academia and industry. From fairness in machine learning to the realities of ML deployment in healthcare, this conversation is packed with insights, real-world challenges, and powerful reflections.

REFERENCES:
MLHOps: Machine Learning Health Operations
Using Chain-of-Thought Prompting for Interpretable Recognition of Social Bias
Dialectic Preference Bias in Large Language Models
The Impact of Unstated Norms in Bias Analysis of Language Models
Can Machine Unlearning Reduce Social Bias in Language Models?
BiasKG: Adversarial Knowledge Graphs to Induce Bias in Large Language Models

👉 Whether you're an AI researcher, a developer working on LLMs, or someone passionate about Responsible AI, this episode is for you.
📌 Subscribe to hear more inspiring stories and cutting-edge ideas from women leading the future of AI.

WiAIR website.
Follow us at:
♾️ LinkedIn
♾️ Bluesky
♾️ X (Twitter)

#WomenInAI #WiAIR #ResponsibleAI #FairnessInAI #AIHealthcare #ExplainableAI #LLMs #AIethics #BiasMitigation #MachineUnlearning #InterpretableAI #AIstartup #AIforGood
More episodes of the podcast Women in AI Research (WiAIR)
Can We Trust AI Explanations? Dr. Ana Marasović on AI Trustworthiness, Explainability & Faithfulness
09/10/2025
Decentralized AI, with Wanru Zhao
25/06/2025
Robots with Empathy, with Dr. Angelica Lim
14/05/2025
Bias in AI, with Amanda Cercas Curry
03/04/2025
Limits of Transformers, with Nouha Dziri
12/03/2025