Listen "AI: Keeping Data Safe"
Episode Synopsis
Every supposedly impenetrable LLM can be jailbroken. And every service agreement guaranteeing that the data you enter into a prompt window will not be used to train future models can be broken, loopholed, or hacked. Once you enter content into a Large Language Model, or post anything onto the web, it's no longer yours. A guide to keeping data safe in the AI landscape.
More episodes of the podcast Working Humans
Small Language Models
20/10/2025
AI & CX
05/10/2025
Model Surfing
23/09/2025
Vibe Coding
11/09/2025
AI Swarms
19/06/2025
AI Organizational Culture
02/06/2025
AI & IP
23/05/2025
Creative AI with David Chislett
09/05/2025
What's Your Stack?
07/05/2025
Embodied AI
07/04/2025