"AI: Keeping Data Safe"
Episode Synopsis
Every supposedly impenetrable LLM can be jailbroken. And every service agreement that promises your data, entered into a prompt window, will not be used to train future models can be broken, loopholed, or hacked. Once you enter content into a large language model, or post anything on the web, it's no longer yours. A guide to keeping data safe in the AI landscape.
More episodes of the podcast Working Humans
Deskless and AI-Powered
08/01/2026
Unchained AI
20/11/2025
AI Leadership Lag
04/11/2025
Small Language Models
20/10/2025
AI & CX
05/10/2025
Model Surfing
23/09/2025
Vibe Coding
11/09/2025
AI Swarms
19/06/2025
AI Organizational Culture
02/06/2025