Measuring Bias, Toxicity, and Truthfulness in LLMs With Python
Episode Synopsis
How can you measure the quality of a large language model? What tools can measure bias, toxicity, and truthfulness levels in a model using Python? This week on the show, Jodie Burchell, developer advocate for data science at JetBrains, returns to discuss techniques and tools for evaluating LLMs with Python.
More episodes of the podcast The Real Python Podcast
Moving Towards Spec-Driven Development
19/12/2025
Advice for Writing Maintainable Python Code
07/11/2025
Evolving Teaching Python in the Classroom
17/10/2025