Listen "Fine-tuning on a Budget"
Episode Synopsis
Big models, tight budgets? No problem. In this episode of Pop Goes the Stack, hosts Lori MacVittie and Joel Moses talk with Dmitry Kit from F5's AI Center of Excellence about LoRA (Low-Rank Adaptation), the not-so-secret weapon for customizing LLMs without melting your GPU or your wallet. From role-specific agents to domain-aware behavior, they break down how LoRA lets you inject intelligence without retraining the entire brain. Whether you're building AI for IT ops, customer support, or anything in between, this is fine-tuning that actually scales. Learn about the benefits, risks, and practical applications of using LoRA to target specific model behavior, reduce latency, and optimize performance, all for under $1,000. Tune in to understand how LoRA can change your approach to AI and machine learning.
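For readers curious what "injecting intelligence without retraining the entire brain" looks like in practice, here is a minimal sketch of the LoRA idea in PyTorch: the pretrained weight is frozen and only a small low-rank update is trained. The layer sizes, rank, and alpha values below are illustrative assumptions, not details from the episode.

```python
# Minimal LoRA sketch: freeze the base weight W and learn a low-rank update
# B @ A, so only rank * (in_features + out_features) parameters are trained.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)      # frozen pretrained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Low-rank factors: A is small random, B starts at zero so the
        # adapter is a no-op before training.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # frozen path + trainable low-rank path
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Example: wrap a single projection layer (sizes are hypothetical)
layer = LoRALinear(nn.Linear(4096, 4096))
out = layer(torch.randn(2, 4096))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")   # ~65K vs ~16.8M in the frozen base
```

Because only the small A and B matrices are updated, the memory and compute cost of fine-tuning drops dramatically, which is what makes sub-$1,000 customization plausible.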
More episodes of the podcast Pop Goes the Stack
Reshaping the web for AI agents and LLMs
16/12/2025
We're on a brief hiatus, we'll be back soon
21/10/2025
Crossing the streams
07/10/2025