"AI Trust, Eval Frameworks, and Why Data Quality Matters"
Episode Synopsis
In this episode of Generation AI, hosts JC and Ardis tackle one of the most pressing concerns in higher education today: how to trust AI outputs. They explore the psychology of trust in technology, the evaluation frameworks used to measure AI accuracy, and how Retrieval Augmented Generation (RAG) helps ground AI responses in factual data. The conversation offers practical insights for higher education professionals who want to implement AI solutions but worry about accuracy and reliability. Listeners will learn how to evaluate AI systems, what questions to ask vendors, and why having public-facing content is crucial for effective AI implementation.

Introduction: The Trust Challenge in AI (00:00:06)
- JC Bonilla and Ardis Kadiu introduce the topic of trusting AI outputs
- Contrasting traditional predictive modeling metrics with new AI evaluation methods
- Understanding that trust is both earned and lost through interactions

The Psychology of Trust in AI (00:03:35)
- How human psychology frameworks for trust transfer to technology
- Challenge appraisal (seeing AI as enhancement) versus threat appraisal (seeing AI as risky)
- Example: How autonomous driving shows trust being built or lost through micro-decisions
- The importance of making AI systems more predictable to humans

Evaluating AI Outputs: The Evals Framework (00:11:41)
- Moving from traditional machine learning metrics to new evaluation methods
- How OpenAI Evals works as a standard for measuring AI performance
- Creating test sets with thousands of variations to check AI outputs
- The concept of "AI checking on AI" for more thorough evaluation
- Element451's achievement of 94-95% accuracy rates on their evaluations

Retrieval Augmented Generation (RAG) Explained (00:27:23)
- RAG as an "open book exam" approach for AI systems
- How data is processed, categorized, and made searchable
- The importance of re-ranking information to find the most relevant content
- How multiple documents can be combined to create accurate answers

Addressing Common AI Trust Concerns
(00:33:31)
- Reducing hallucinations through proper grounding in source material
- Why "garbage in, garbage out" fears are often overblown
- Using public-facing content as reliable data sources
- The value of traceable sources in building confidence in AI responses

Conclusion: Building Earned Trust (00:38:11)
- Trust in AI comes from reliability and transparency
- The importance of asking the right questions when selecting AI partners
- How to distinguish between companies just talking about AI versus implementing best practices
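For readers curious what an evals-style check looks like in practice, here is a minimal sketch of the pattern discussed in the episode: run a test set of question/expected-answer pairs through a model and compute an accuracy rate. The `model_answer` function and the test cases are hypothetical stand-ins, not Element451's or OpenAI's actual implementation; a production harness would call a real model API and often use a second model as the grader ("AI checking on AI").

```python
def model_answer(question: str) -> str:
    # Hypothetical stand-in for a real LLM call, using canned responses
    # so the sketch is self-contained and runnable.
    canned = {
        "What is the application deadline?": "The deadline is January 15.",
        "Is the campus tour free?": "Yes, campus tours are free.",
    }
    return canned.get(question, "I don't know.")

def run_eval(test_set: list[dict]) -> float:
    """Score each model answer against an expected reference and
    return the overall accuracy rate (e.g. 0.94 for 94%)."""
    passed = 0
    for case in test_set:
        answer = model_answer(case["question"])
        # Simple graded check: does the answer contain the expected fact?
        # Real test sets hold thousands of phrasing variations per topic.
        if case["expected"].lower() in answer.lower():
            passed += 1
    return passed / len(test_set)

test_set = [
    {"question": "What is the application deadline?", "expected": "January 15"},
    {"question": "Is the campus tour free?", "expected": "free"},
]
print(run_eval(test_set))  # prints 1.0 for this tiny set
```

Tracking this accuracy number over time is what lets a vendor make claims like the 94-95% rate mentioned above.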
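The RAG "open book exam" flow described above can likewise be sketched in a few lines: chunk public-facing content, retrieve candidates, re-rank them by relevance, and hand the top chunks to the model as its grounded context. The documents and the word-overlap scoring function here are illustrative assumptions; real systems use embedding search and learned re-rankers.

```python
import string

def tokenize(text: str) -> set[str]:
    # Lowercase and strip punctuation so "open?" matches "open".
    return {w.strip(string.punctuation) for w in text.lower().split()}

def score(query: str, doc: str) -> float:
    # Toy relevance score: fraction of query words found in the document.
    q_words = tokenize(query)
    return len(q_words & tokenize(doc)) / len(q_words)

def retrieve_and_rerank(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    # Rank every chunk by relevance, then keep only the best (re-ranking).
    ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
    return ranked[:top_k]

# Illustrative public-facing content chunks.
docs = [
    "Tuition for the 2025 academic year is $32,000.",
    "The library is open 24 hours during finals week.",
    "Financial aid applications open in October.",
]
context = retrieve_and_rerank("When do financial aid applications open?", docs)
# The top-ranked chunks become the "open book" the model answers from,
# with each chunk traceable back to its source page.
print(context[0])  # prints the financial aid chunk
```

Because every answer is assembled from retrieved chunks, each claim can be traced back to a source document, which is the traceability benefit the hosts highlight.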
- - - -
Connect With Our Co-Hosts:
Ardis Kadiu
https://www.linkedin.com/in/ardis/
https://twitter.com/ardis
Dr. JC Bonilla
https://www.linkedin.com/in/jcbonilla/
https://twitter.com/jbonillx

About The Enrollify Podcast Network:
Generation AI is a part of the Enrollify Podcast Network. If you like this podcast, chances are you'll like other Enrollify shows too! Enrollify is made possible by Element451 — The AI Workforce Platform for Higher Ed. Learn more at element451.com. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.