Finetuning vs RAG
Episode Synopsis
Large language models (LLMs) excel at a wide range of tasks thanks to their vast training datasets, but their knowledge is frozen at training time and can lack domain-specific nuance. Researchers have explored two main methods to address these limitations: fine-tuning and retrieval-augmented generation (RAG). Fine-tuning adjusts a pre-trained model on a narrower dataset to improve its performance in a specific domain. RAG, on the other hand, expands an LLM's capabilities, especially on knowledge-intensive tasks, by retrieving relevant documents from an external knowledge source at inference time and grounding the model's output in them.
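The RAG pipeline described above can be sketched in a few lines. This is a minimal, illustrative example only: the keyword-overlap `retrieve` function, the `build_prompt` helper, and the tiny in-memory corpus are hypothetical stand-ins for a real vector store, embedding model, and LLM call.

```python
def retrieve(query: str, corpus: list[str]) -> str:
    """Return the document sharing the most words with the query.
    A real system would use embeddings and vector similarity instead."""
    query_words = set(query.lower().split())
    return max(corpus, key=lambda doc: len(query_words & set(doc.lower().split())))

def build_prompt(query: str, context: str) -> str:
    """Prepend the retrieved context so the model's answer is grounded in it."""
    return f"Context: {context}\n\nQuestion: {query}\nAnswer:"

# Tiny stand-in knowledge base (a real one would hold many documents).
corpus = [
    "Fine-tuning adjusts a pre-trained model's weights on a narrower dataset.",
    "RAG retrieves external documents and conditions generation on them.",
]

query = "How does RAG work?"
prompt = build_prompt(query, retrieve(query, corpus))
```

The resulting `prompt` would then be sent to any LLM; because the retrieved passage is injected at inference time, the knowledge base can be updated without retraining the model, which is the core trade-off versus fine-tuning.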
More episodes of the podcast AI Blindspot
- AIE World's Fair Recap of Day 2 (24/06/2025)
- Understanding Agentic Workflows (20/05/2025)
- Building Effective AI Agents (04/05/2025)
- DeepSeek-V3 Technical Deep Dive (05/02/2025)
- Agentic Design Pattern III - Tool Use (20/12/2024)
- Agentic Design Pattern II - Reflection (02/12/2024)
- Agentic Design Pattern I - Planning (04/11/2024)
- AI Agents (29/10/2024)