EP.12 Securing the Future of AI: Inside Prompt Injection, Jailbreaks & LLM Risks

18/07/2025 45 min Season 1, Episode 12

Listen "EP.12 Securing the Future of AI: Inside Prompt Injection, Jailbreaks & LLM Risks"

Episode Synopsis

Discover how cybersecurity experts are tackling the growing security risks of Large Language Models (LLMs) in today's AI-powered world. In this episode, Mahesh and Vijay explore real-world examples of AI vulnerabilities, from the infamous "$1 Chevrolet" chatbot mishap to advanced exploits like the Grandma attack, DAN jailbreaks, and prompt injections. They also break down why AI-generated code can be risky and how businesses can safeguard their models with guardrails, scanning tools, and best-practice frameworks.
