Security By Design

19/09/2025 · 10 min · Season 1, Episode 23

Listen "Security By Design"

Episode Synopsis

AI agents are under attack. From prompt injection exploits to invisible system takeovers, new security threats are forcing a rethink of how we build, test, and trust autonomous systems.

Welcome to Episode 23 of the Agents Unleashed Podcast, the show that helps you find signal in the noisy world of agentic AI. Hosted by Thomas Maybrier, this episode investigates the growing danger of prompt injection and how attackers are learning to hijack AI agents to steal data, drain wallets, and impersonate users. But it's not all bad news: Thomas also explores how open-source workflows, decentralized protocols, and new evaluation tools like Olas Predict may offer a more secure path forward.

In This Episode

Why prompt injection is the #1 threat for AI agents
Real-world hacks from Black Hat
What red teaming has revealed about agent behavior
How Olas agents handle trust minimization, verification, and incentives

Chapters

00:00 – Welcome to Agents Unleashed
00:50 – Real-world prompt injection at Black Hat
03:21 – NVIDIA GTC demo: multi-agent red teaming
04:07 – Why prompt injection is a systemic threat
04:56 – The risks of compromised agents
05:16 – How do we make AI agents trustworthy?
06:19 – How Olas manages risk

Resources & Links

Agents Unleashed in Singapore → https://olas.network/agents-unleashed
Olas Whitepaper → https://olas.network/documents/whitepaper/Whitepaper%20v1.0.pdf
Listener Survey → https://olas.network/blog/pod-survey
CopyPasta License Attack → https://hiddenlayer.com/innovation-hub/prompts-gone-viral-practical-code-assistant-ai-viruses/
Follow Thomas on X → https://x.com/thomasmaybrier

🎵 Theme music: “Forward” by Grand Project on Pixabay: https://pixabay.com/users/grand_project-19033897/

💬 Like, subscribe, and leave a comment to support the show.

Sponsored by Olas: Build and own AI agents → https://olas.network