Python RAG: AI for PDFs with Local LLMs
Episode Synopsis
This episode explores how to build a Python Retrieval-Augmented Generation (RAG) application that leverages local Large Language Models (LLMs) to answer questions based on a collection of PDF documents. The episode demonstrates how to load PDF documents, create embeddings, and use ChromaDB to build a vector database. It also covers updating the database incrementally without rebuilding it from scratch, and evaluating the quality of AI-generated responses through unit testing. The tutorial uses board game instruction manuals as a case study, showing how to ask questions such as "how do I build a hotel in Monopoly?" and receive answers grounded in the provided documents. Because the LLMs run locally, the entire application can operate on the user's own computer. The episode builds on previous RAG tutorials, adding more advanced features requested by viewers.
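The pipeline described above — embedding document chunks, storing them in a vector database under stable IDs so the database can be updated without a full rebuild, and retrieving the most relevant chunks for a question — can be sketched with a toy in-memory store. Everything below is illustrative, not the tutorial's actual code: the hash-based `embed` function is a non-semantic stand-in for a real local embedding model, and the `VectorStore` class and chunk ID scheme (`file:page:chunk`) only mimic the shape of a ChromaDB-style add/upsert/query workflow.

```python
import hashlib
import math

def embed(text: str, dim: int = 8) -> list[float]:
    # Toy stand-in for a real embedding model: a deterministic
    # pseudo-vector derived from a hash. It is NOT semantic, so
    # retrieval here is only structurally (not meaningfully) correct.
    digest = hashlib.sha256(text.encode()).digest()
    vec = [b / 255 for b in digest[:dim]]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

class VectorStore:
    """Minimal in-memory sketch of a ChromaDB-like vector store."""

    def __init__(self) -> None:
        self.records: dict[str, tuple[list[float], str]] = {}

    def upsert(self, doc_id: str, text: str) -> None:
        # Stable IDs (e.g. "monopoly.pdf:12:0") are what make
        # incremental updates possible: a changed chunk overwrites
        # its old entry instead of forcing a full rebuild.
        self.records[doc_id] = (embed(text), text)

    def query(self, question: str, n_results: int = 1) -> list[tuple[str, str]]:
        # Rank stored chunks by dot product with the query vector
        # (cosine similarity, since all vectors are normalized).
        q = embed(question)
        scored = sorted(
            self.records.items(),
            key=lambda kv: -sum(a * b for a, b in zip(q, kv[1][0])),
        )
        return [(doc_id, doc) for doc_id, (_, doc) in scored[:n_results]]

# Build the store from (hypothetical) pre-chunked manual text.
store = VectorStore()
store.upsert("monopoly.pdf:12:0", "Hotels may be built once a player owns four houses on every lot of a color group.")
store.upsert("ticket_to_ride.pdf:3:1", "Claim a route by playing a set of matching train cards.")

# Retrieve context for a question; a real app would pass this to the local LLM.
top = store.query("how do I build a hotel in Monopoly?", n_results=1)
```

Re-running `upsert` with the same ID after a PDF changes replaces just that chunk, which is the essence of updating the database without rebuilding it; a real implementation would use ChromaDB's persistent collections and genuine embeddings instead.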
More episodes of the podcast Curiosophy: A Future Forward Cast.
Drone Swarmer
28/10/2025
Shodan Unmasking the Internet's Devices
12/09/2025
Complete guide to smuggling
11/09/2025
Shodan The Search Engine
10/09/2025
Nmap Demystified
05/09/2025