AI: Keeping Data Safe

07/06/2024 8 min Season 2 Episode 16

Episode Synopsis

Every "impenetrable" LLM can be jailbroken. And every service agreement that guarantees your data, entered into a prompt window, will not be used to train future models can be broken, loopholed, or hacked. Once you enter content into a Large Language Model, or post anything on the web, it's no longer yours. A guide to keeping your data safe in the AI landscape.

More episodes of the podcast Working Humans