Dennis Wei from IBM on In-Context Explainability and the Future of Trustworthy AI
Episode Synopsis
Dennis Wei, Senior Research Scientist at IBM specializing in human-centered trustworthy AI, speaks with Pitt HexAI podcast host Jordan Gass-Pooré about his work on trustworthy machine learning, including interpretability of machine learning models, algorithmic fairness, robustness, causal inference, and graphical models.

Focusing on explainable AI, they discuss in depth the explainability of large language models (LLMs), the field of in-context explainability, and IBM's new In-Context Explainability 360 (ICX360) toolkit. They explore research project ideas for students and touch on personalizing explainability outputs for different users and on leveraging explainability to guide and optimize LLM reasoning. They also discuss IBM's interest in collaborating with university labs on explainable AI in healthcare, as well as related IBM work on the steerability of LLMs and on combining explainability with steerability to evaluate model modifications.

This episode provides a deep dive into explainable AI, exploring how the field's cutting-edge research is contributing to more trustworthy applications of AI in healthcare. The discussion also highlights emerging research directions well suited to new academic projects and university-industry collaborations.

Guest profile: https://research.ibm.com/people/dennis-wei
ICX360 Toolkit: https://github.com/IBM/ICX360
More episodes of the podcast Health and Explainable AI Podcast
Karen Colbert on Pitt HexAI
05/06/2025
Anatea Einhorn on Pitt HexAI
11/05/2025
Yanshan Wang on Pitt HexAI
02/04/2025
RAI for Ukraine Program on Pitt HexAI
16/02/2025
Xenophon Papademetris on Pitt HexAI
28/12/2024
Beth Bauer on Pitt HexAI
23/11/2024
Ashwin Kumar on Pitt HexAI
25/09/2024