Listen "019: How practically safeguarding generative AI works IRL"
Episode Synopsis
Kylie Whitehead and Meghan Berton discuss the importance of implementing guardrails to limit a system to its intended scope and to ensure that generated outputs are safe. They also touch on topics like user input filtering, hybrid generative AI solutions, hallucination management, and preventing offensive or biased responses, highlighting the challenges and opportunities of using generative AI in voice assistants and the importance of continuously improving and refining the technology - all in today's episode!
Follow PolyAI on LinkedIn
Watch this and other episodes of the Deep Learning pod on YouTube
More episodes of the podcast Deep Learning with PolyAI
Can journalism teach us how to trust AI?
04/12/2025
What does truly multilingual CX sound like?
13/11/2025
Can we solve AI's "deer-in-headlights" problem? (with Dan Miller, founder of Opus Research)
16/10/2025
Should hyper-growth brands still pick up the phone? (with Austin Towns, CTO of Hello Sugar)
16/10/2025
Your new favorite colleagues aren’t human
04/09/2025