Unpacking Generative AI Red Teaming and Practical Security Solutions
Episode Synopsis
Full transcript with links to resources available at https://mlsecops.com/podcast/unpacking-generative-ai-red-teaming-and-practical-security-solutions

In this episode, we explore LLM red teaming beyond simple "jailbreak" prompts with special guest Donato Capitella from WithSecure Consulting. You'll learn why vulnerabilities live in context, in how LLMs interact with users, tools, and documents, and discover best practices for mitigating attacks like prompt injection. Our guest also previews an open-source tool for automating security tests on LLM applications.

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.

Additional tools and resources to check out:
- Protect AI Guardian: Zero Trust for ML Models
- Recon: Automated Red Teaming for GenAI
- Protect AI's ML Security-Focused Open Source Tools
- LLM Guard: Open Source Security Toolkit for LLM Interactions
- Huntr: The World's First AI/Machine Learning Bug Bounty Platform