"A New Trick Uses AI to Jailbreak AI Models—Including GPT-4"
Episode Synopsis
Adversarial algorithms can systematically probe large language models like OpenAI’s GPT-4 for weaknesses that can make them misbehave. Read the full story at WIRED.com.
Learn about your ad choices: dovetail.prx.org/ad-choices