Listen "Using Role-Playing Scenarios to Identify Bias in LLMs"
Episode Synopsis
Harmful biases in large language models (LLMs) make AI less trustworthy and less secure. Auditing for these biases can help identify potential solutions and inform better guardrails that make AI safer. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Katie Robinson and Violet Turri, researchers in the SEI's AI Division, discuss their recent work using role-playing game scenarios to identify biases in LLMs.