Listen "Prompts gone rogue."
Episode Synopsis
Shachar Menashe, Senior Director of Security Research at JFrog, discusses his team's research, "When Prompts Go Rogue: Analyzing a Prompt Injection Code Execution in Vanna.AI." The vulnerability, tracked as CVE-2024-5565, lets attackers exploit the Vanna.AI tool's use of large language models (LLMs): by manipulating user input, they can steer the model's output into executing malicious code, a technique known as prompt injection.
This poses a significant risk wherever LLM output is wired into critical functions, underscoring the need for stronger security measures around LLM integrations.
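For readers who want the mechanics, here is a minimal sketch of the unsafe pattern the episode describes: code generated by an LLM is executed without validation, so anyone who can influence the prompt can influence what runs. All names here (generate_plot_code, run_visualization, llm.complete) are hypothetical illustrations, not Vanna.AI's actual API.

```python
# Minimal sketch of the vulnerable pattern behind CVE-2024-5565,
# using hypothetical names -- not Vanna.AI's actual implementation.

def generate_plot_code(llm, question: str, sql: str) -> str:
    # The user-controlled question is interpolated into the prompt
    # unchanged -- this is the prompt-injection entry point.
    prompt = (
        "Generate Python Plotly code that charts the results of this query.\n"
        f"Question: {question}\n"
        f"SQL: {sql}\n"
        "Reply with Python code only.\n"
    )
    return llm.complete(prompt)  # hypothetical LLM client call

def run_visualization(llm, question: str, sql: str, df) -> None:
    code = generate_plot_code(llm, question, sql)
    # Executing the model's output verbatim is what turns prompt
    # injection into code execution on the host.
    exec(code, {"df": df})
```

Under this pattern, a question crafted to override the instructions (for example, one telling the model to emit a call like __import__("os").system(...)) can make the generated "chart code" run arbitrary commands, which is why dynamically executing LLM output generally needs to be sandboxed or avoided altogether.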
The research can be found here:
When Prompts Go Rogue: Analyzing a Prompt Injection Code Execution in Vanna.AI
Learn more about your ad choices. Visit megaphone.fm/adchoices
More episodes of the podcast Research Saturday
Excel-lerating cyberattacks. (27/12/2025)
The lies that let AI run amok. (20/12/2025)
Root access to the great firewall. (13/12/2025)
When macOS gets frostbite. (06/12/2025)
A new stealer hiding behind AI hype. (29/11/2025)
Two RMMs walk into a phish… (22/11/2025)
When clicks turn criminal. (15/11/2025)
A fine pearl gone rusty. (08/11/2025)
Attack of the automated ops. (01/11/2025)
A look behind the lens. (25/10/2025)