Listen "Decoding LLM Uncertainties for Better Predictability"
Episode Synopsis
Welcome to another riveting episode of "Grounded Truth"! In this episode, your host John Singleton, co-founder and Head of Success at Watchful, is joined by Shayan Mohanty, CEO of Watchful. Together, they embark on a deep dive into the intricacies of Large Language Models (LLMs).

In Watchful's journey through language model exploration, we've uncovered fascinating insights into putting the "engineering" back into prompt engineering. Our latest research focuses on introducing meaningful observability metrics to enhance our understanding of language models.

If you'd like to explore on your own, feel free to play with the demo here: https://uncertainty.demos.watchful.io/
The repo can be found here: https://github.com/Watchfulio/uncertainty-demo

💡 What to expect in this episode:
- A recap of our last exploration, where we unveiled the role of perceived ambiguity in LLM prompts and its alignment with the "ground truth."
- An introduction of two critical measures: Structural Uncertainty (using normalized entropy) and Conceptual Uncertainty (revealing internal cohesion through cosine distances). A toy sketch of both follows at the end of this synopsis.
- Why these measures matter: assessing predictability in prompts, guiding decisions on fine-tuning versus prompt engineering, and setting the stage for objective model comparisons.

🚀 Join John and Shayan on this quest to make language model interactions more transparent and predictable. The episode aims to unravel complexities, provide actionable insights, and pave the way for a clearer understanding of LLM uncertainties.
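For the curious: the episode itself doesn't walk through code, but here is a minimal Python sketch of how the two measures can be computed. This is an illustration under our own assumptions, not Watchful's implementation (see the repo above for the real thing); the function names and toy numbers are hypothetical. Structural Uncertainty is approximated as the Shannon entropy of a token probability distribution divided by its maximum possible value, and Conceptual Uncertainty as the mean pairwise cosine distance between embedding vectors of sampled responses.

```python
import numpy as np

def normalized_entropy(probs):
    """Structural Uncertainty sketch: Shannon entropy of a token
    probability distribution, divided by the maximum possible
    entropy (log N) so the result lies in [0, 1]."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]                      # ignore zero-probability entries
    if len(p) <= 1:                   # a single certain outcome has zero entropy
        return 0.0
    return float(-np.sum(p * np.log(p)) / np.log(len(p)))

def mean_cosine_distance(embeddings):
    """Conceptual Uncertainty sketch: average pairwise cosine distance
    across embedding vectors; higher values mean lower internal cohesion."""
    v = np.asarray(embeddings, dtype=float)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)   # unit-normalize rows
    sims = v @ v.T                                     # cosine similarity matrix
    upper = sims[np.triu_indices(len(v), k=1)]         # unique pairs only
    return float(np.mean(1.0 - upper))

# Toy examples: a peaked distribution is more predictable (lower entropy).
print(normalized_entropy([0.9, 0.05, 0.03, 0.02]))   # ~0.31
print(normalized_entropy([0.25, 0.25, 0.25, 0.25]))  # 1.0
```

Read the outputs directionally: values near 0 suggest a peaked, predictable distribution (or tightly cohesive responses), while values near 1 suggest the model is spreading probability widely (or the responses drift apart conceptually).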
More episodes of the podcast Grounded Truth
Is Data Labeling Dead?
11/08/2023
The Application of LLMs for Database DevOps
13/06/2023
What is Prompt Ensembling?
23/05/2023
Engineering with Large Language Models
12/05/2023
Dr. Jennifer Prendki, PhD, Alectio
24/04/2023