Listen "Data Preparation Best Practices for Fine Tuning"
Episode Synopsis
In this episode of The Prompt Desk podcast, hosts Bradley Arsenault and Justin Macorin dive deep into the world of fine-tuning large language models. They discuss:

- The evolution of data preparation techniques from traditional NLP to modern LLMs
- Strategies for creating high-quality datasets for fine-tuning
- The surprising effectiveness of small, well-curated datasets
- Best practices for aligning training data with production environments
- The importance of data quality and its impact on model performance
- Practical tips for engineers working on LLM fine-tuning projects

Whether you're a seasoned AI practitioner or just getting started with large language models, this episode offers valuable insights into the critical process of data preparation and fine-tuning. Join Brad and Justin as they share their expertise and help you navigate the challenges of building effective AI systems.

Continue listening to The Prompt Desk Podcast for everything LLM & GPT, Prompt Engineering, Generative AI, and LLM Security.
Check out PromptDesk.ai for an open-source prompt management tool.
Check out Brad's AI consultancy at bradleyarsenault.me.
Add Justin Macorin and Bradley Arsenault on LinkedIn.

Hosted on Ausha. See ausha.co/privacy-policy for more information.
More episodes of the podcast The Prompt Desk
What we learned about LLMs in a year
02/10/2024
Validating Inputs with LLMs
25/09/2024
Why you can't automate everything with LLMs
18/09/2024
Multilingual Prompting
28/08/2024
Safely Executing LLM Code
21/08/2024
How to Rescue AI Innovation at Big Companies
14/08/2024
How UX Will Change With Integrated Advice
07/08/2024
Prompting in Tool Results
31/07/2024
Can custom chips save AI's power problem?
24/07/2024