How to protect your LLM against Prompt Injections
Episode Synopsis
In this episode, we discuss how to protect prompt-based applications and LLMs from prompt injection. We look at how data validation was done in the 1960s, and at modern libraries and techniques that can act as a first line of defense against prompt injection. We also touch on the idea of using other types of models, such as decision trees, conventional NLP pipelines, embedding models, or neural networks trained on datasets different from typical LLM training data, to validate inputs before they are sent to an LLM.

Continue listening to The Prompt Desk Podcast for everything LLM & GPT, Prompt Engineering, Generative AI, and LLM Security.

Check out PromptDesk.ai for an open-source prompt management tool.

Check out Brad's AI Consultancy at bradleyarsenault.me

Add Justin Macorin and Bradley Arsenault on LinkedIn.

Please fill out our listener survey here to help us create a better podcast: https://docs.google.com/forms/d/e/1FAIpQLSfNjWlWyg8zROYmGX745a56AtagX_7cS16jyhjV2u_ebgc-tw/viewform?usp=sf_link

Hosted on Ausha. See ausha.co/privacy-policy for more information.
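The "first line of defense" idea mentioned in the synopsis can be sketched as a cheap validation pass that runs before any input reaches the LLM. The pattern list, threshold, and function names below are hypothetical illustrations, not a rule set endorsed by the hosts:

```python
import re

# Toy pre-validation filter: a cheap, non-LLM check that runs before the
# (more expensive) model call. The phrase list below is a hypothetical
# example of known injection wording, not a production rule set.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"reveal (your|the) (system )?prompt",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrase."""
    text = user_input.lower()
    return any(re.search(p, text) for p in SUSPICIOUS_PATTERNS)

def validate_then_send(user_input: str, send_to_llm) -> str:
    """Run the cheap check first; only call the LLM for clean inputs."""
    if looks_like_injection(user_input):
        return "Input rejected by pre-validation."
    return send_to_llm(user_input)
```

In the same spirit, the regex check could be swapped for a decision tree, an embedding-similarity classifier, or a conventional NLP pipeline, as discussed in the episode; the key design choice is that the validator is a separate, cheaper model that was not trained on the same data as the LLM it protects.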
More episodes of the podcast The Prompt Desk
What we learned about LLMs in a year
02/10/2024
Validating Inputs with LLMs
25/09/2024
Why you can't automate everything with LLMs
18/09/2024
Multilingual Prompting
28/08/2024
Safely Executing LLM Code
21/08/2024
How to Rescue AI Innovation at Big Companies
14/08/2024
How UX Will Change With Integrated Advice
07/08/2024
Prompting in Tool Results
31/07/2024
Can custom chips save AI's power problem?
24/07/2024