Building RAG Applications with Vector Databases
Episode Synopsis
An in-depth introduction to Retrieval-Augmented Generation (RAG), explaining how it enhances Large Language Models (LLMs) by grounding their responses in external knowledge for accurate, context-aware answers. The episode walks through a typical RAG pipeline built with frameworks such as LlamaIndex for document processing and query management, and covers ChromaDB in depth as a vector database for efficient semantic search and metadata filtering in RAG applications.
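To make the retrieval step concrete, here is a minimal sketch of what a vector database like ChromaDB does inside a RAG pipeline: documents are embedded as vectors, and a query is answered by cosine similarity plus an optional metadata filter. The embedding function and the `ToyVectorStore` class are hypothetical stand-ins for illustration, not the actual ChromaDB or LlamaIndex APIs.

```python
import math


def embed(text):
    # Toy bag-of-words embedding: a deterministic stand-in for a real
    # embedding model (hypothetical, for illustration only).
    vec = [0.0] * 32
    for word in text.lower().split():
        vec[sum(ord(c) for c in word) % 32] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]


class ToyVectorStore:
    """Minimal in-memory store mimicking the add/query pattern of a
    vector database such as ChromaDB (illustrative sketch only)."""

    def __init__(self):
        self.items = []  # list of (embedding, document, metadata)

    def add(self, document, metadata):
        self.items.append((embed(document), document, metadata))

    def query(self, text, n_results=1, where=None):
        q = embed(text)
        # Score each candidate by cosine similarity, applying the
        # optional metadata filter first (semantic search + filtering).
        scored = [
            (sum(a * b for a, b in zip(q, e)), doc)
            for e, doc, meta in self.items
            if where is None or all(meta.get(k) == v for k, v in where.items())
        ]
        scored.sort(reverse=True)
        return [doc for _, doc in scored[:n_results]]


# Usage: index two snippets, then retrieve by semantic similarity.
store = ToyVectorStore()
store.add("ChromaDB is a vector database for semantic search", {"topic": "rag"})
store.add("LLMs generate text from prompts", {"topic": "llm"})
print(store.query("vector database", n_results=1, where={"topic": "rag"}))
```

In a real pipeline, LlamaIndex would handle chunking documents and routing the query, and ChromaDB would supply the persistent, indexed vector store; the retrieved passages are then inserted into the LLM prompt as context.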