Listen "Prompting in Tool Results"
Episode Synopsis
If you are using system prompts, chat completions, and tool calls with OpenAI, you might find it challenging to get the model to follow your prompts. If you are like the show hosts, you often find there are certain instructions your bot simply refuses to listen to!

In the latest episode of The Prompt Desk, show host Bradley Arsenault shares with Justin Macorin the latest technique he has been using to get the bot to comply with instructions it otherwise refuses to follow. The technique is to add further instructions mid-conversation through tool results. Instead of treating a tool result as simply containing data, rethink it as combining both data and a prompt telling the bot what to do with that data (see the sketch after these notes). With this technique, your show hosts have significantly improved the reliability of their bots.

---

Continue listening to The Prompt Desk Podcast for everything LLM & GPT, Prompt Engineering, Generative AI, and LLM Security.

Check out PromptDesk.ai for an open-source prompt management tool.

Check out Brad's AI Consultancy at bradleyarsenault.me

Add Justin Macorin and Bradley Arsenault on LinkedIn.

Please fill out our listener survey to help us create a better podcast: https://docs.google.com/forms/d/e/1FAIpQLSfNjWlWyg8zROYmGX745a56AtagX_7cS16jyhjV2u_ebgc-tw/viewform?usp=sf_link
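As a concrete illustration of the technique described in the synopsis above (not code from the episode), here is a minimal sketch using the OpenAI Python SDK. A hypothetical `lookup_order` tool returns a payload that combines its data with instructions for the model; the model name, the tool, and the `data`/`instructions` field names are all assumptions for illustration.

```python
# A minimal sketch of "prompting in tool results" with the OpenAI Python SDK.
# The model name, the lookup_order tool, and the data/instructions field
# names are illustrative assumptions, not details from the episode.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "lookup_order",
        "description": "Look up the status of an order by its ID.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

messages = [
    {"role": "system", "content": "You are a customer support assistant."},
    {"role": "user", "content": "Where is order 1234?"},
]

# Force the tool call so the sketch stays deterministic.
first = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "lookup_order"}},
)
assistant_msg = first.choices[0].message
messages.append(assistant_msg)
tool_call = assistant_msg.tool_calls[0]

# The core of the technique: the tool result is not just data. It pairs the
# data with a prompt telling the model what to do (and not do) with it.
tool_result = {
    "data": {"order_id": "1234", "status": "shipped", "eta": "2024-10-05"},
    "instructions": (
        "Answer in one sentence using only the data above. "
        "Do not speculate about delays and do not offer refunds."
    ),
}
messages.append({
    "role": "tool",
    "tool_call_id": tool_call.id,
    "content": json.dumps(tool_result),
})

second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)
```

The intuition from the episode is that instructions delivered alongside the data, at the moment the model consumes it, tend to be followed more reliably than the same instructions buried in a system prompt written before the conversation began.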
More episodes of the podcast The Prompt Desk
What we learned about LLMs in a year
02/10/2024
Validating Inputs with LLMs
25/09/2024
Why you can't automate everything with LLMs
18/09/2024
Multilingual Prompting
28/08/2024
Safely Executing LLM Code
21/08/2024
How to Rescue AI Innovation at Big Companies
14/08/2024
How UX Will Change With Integrated Advice
07/08/2024
Can custom chips save AI's power problem?
24/07/2024