Listen "674: Parameter-Efficient Fine-Tuning of LLMs using LoRA (Low-Rank Adaptation)"
Episode Synopsis
Models like Alpaca, Vicuña, GPT4All-J and Dolly 2.0 have relatively small model architectures, but they're still prohibitively expensive to train, even on a small amount of your own data. The standard model-training protocol can also lead to catastrophic forgetting. In this week's episode, Jon explores a solution to these problems, introducing listeners to Parameter-Efficient Fine-Tuning (PEFT) and the leading approach: Low-Rank Adaptation (LoRA).
Additional materials: www.superdatascience.com/674
Interested in sponsoring a SuperDataScience Podcast episode? Visit JonKrohn.com/podcast for sponsorship information.
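As a rough illustration of the idea behind LoRA, not code taken from the episode itself, the sketch below shows how a causal language model might be wrapped with low-rank adapters using the Hugging Face peft library. The base model name, target modules, and hyperparameter values are assumptions chosen for illustration only.

    # Minimal, hypothetical LoRA sketch with the Hugging Face peft library.
    # Model choice ("gpt2"), target modules, and hyperparameters are
    # illustrative assumptions, not values from the episode.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model

    lora_config = LoraConfig(
        r=8,                        # rank of the low-rank update matrices
        lora_alpha=16,              # scaling applied to the low-rank update
        lora_dropout=0.05,
        target_modules=["c_attn"],  # GPT-2's fused attention projection
        task_type="CAUSAL_LM",
    )

    model = get_peft_model(base, lora_config)
    model.print_trainable_parameters()  # only the small adapter matrices are trainable

Because the frozen base weights are left untouched and only the small adapter matrices are updated, this kind of setup is what lets PEFT approaches cut fine-tuning cost and reduce the risk of catastrophic forgetting.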
More episodes of the podcast Super Data Science: ML & AI Podcast with Jon Krohn
953: Beyond “Agent Washing”: AI Systems That Actually Deliver ROI, with Dell’s Global CTO John Roese
30/12/2025
952: How to Avoid Burnout and Get Promoted, with “The Fit Data Scientist” Penelope Lafeuille
26/12/2025
948: In Case You Missed It in November 2025
12/12/2025
946: How Robotaxis Are Transforming Cities
05/12/2025
945: AI is a Joke, with Joel Beasley
02/12/2025
944: Gemini 3 Pro: Google’s Back on Top
28/11/2025