Listen "Python RAG: AI for PDFs with Local LLMs"
Episode Synopsis
This episode explores how to build a Python Retrieval-Augmented Generation (RAG) application that uses local Large Language Models (LLMs) to answer questions about a collection of PDF documents. It walks through loading the PDFs, creating embeddings, and building a vector database with ChromaDB, then covers updating that database incrementally without rebuilding it and evaluating the quality of AI-generated responses through unit testing. Board game instruction manuals serve as the case study: you can ask questions such as "how do I build a hotel in Monopoly?" and receive answers grounded in the provided documents. Because the models run locally, the entire application can run on the user's own computer. The episode builds on previous RAG tutorials, adding more advanced features requested by viewers. A rough code sketch of the workflow described here follows below.
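The episode does not reproduce its code here, but the workflow it describes can be sketched in Python. The snippet below is a minimal illustration under stated assumptions, not the tutorial's actual implementation: it assumes pypdf for text extraction, ChromaDB's built-in embedding function for vectors, and a local model served by Ollama at its default endpoint; the data/ folder, collection name, page-level chunking, and the mistral model are placeholder choices. Deterministic chunk IDs are what let repeated runs upsert only new or changed pages instead of rebuilding the whole database.

```python
"""Minimal RAG sketch: index PDFs into ChromaDB, then answer a question
with a locally served LLM. Illustrative only; paths, model name, and
chunking scheme are assumptions, not the episode's exact code."""

from pathlib import Path

import chromadb
import requests
from pypdf import PdfReader

DATA_DIR = Path("data")                              # assumed folder of PDF manuals
OLLAMA_URL = "http://localhost:11434/api/generate"   # Ollama's default endpoint
MODEL = "mistral"                                    # any model already pulled into Ollama

# Persistent ChromaDB store; the collection's default embedding function is used,
# so no separate embedding model has to be configured for this sketch.
client = chromadb.PersistentClient(path="chroma")
collection = client.get_or_create_collection("pdf_pages")


def index_pdfs() -> None:
    """Upsert one chunk per PDF page, keyed by a stable 'file:page' ID.
    Re-running this updates changed pages without rebuilding the database."""
    ids, docs, metas = [], [], []
    for pdf_path in DATA_DIR.glob("*.pdf"):
        reader = PdfReader(str(pdf_path))
        for page_num, page in enumerate(reader.pages):
            text = page.extract_text() or ""
            if not text.strip():
                continue
            ids.append(f"{pdf_path.name}:{page_num}")
            docs.append(text)
            metas.append({"source": pdf_path.name, "page": page_num})
    if ids:
        collection.upsert(ids=ids, documents=docs, metadatas=metas)


def ask(question: str, k: int = 5) -> str:
    """Retrieve the k most similar chunks and have the local LLM answer
    using only that retrieved context."""
    results = collection.query(query_texts=[question], n_results=k)
    context = "\n\n---\n\n".join(results["documents"][0])
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    response = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]


if __name__ == "__main__":
    index_pdfs()
    print(ask("How do I build a hotel in Monopoly?"))
```

For the response-quality evaluation the episode mentions, one plausible pattern is a unit test that asks a question with a known answer and asserts the expected fact appears in the reply. The hypothetical pytest case below reuses the ask() helper from the sketch above; the expected wording is an assumption about the Monopoly manual, not the episode's actual test.

```python
# Hypothetical pytest check for a question whose answer is known in advance.
def test_monopoly_hotel_rule():
    answer = ask("How many houses are needed before building a hotel in Monopoly?")
    assert "4" in answer or "four" in answer.lower()
```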
More episodes of the podcast Curiosophy: A Future Forward Cast.
The $3 Million Hacker
12/01/2026
Mobile Hacking Tools for Ethical Hacking
07/01/2026
Generative AI Security Ethics and GDPR
31/12/2025
Metasploit Deconstructed
29/12/2025
The Invisible War
28/12/2025
Nexus: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari
26/12/2025
Deconstructing a Black Hat Hacking Tutorial
26/12/2025
Black Hat Hacking: From Zero to Advanced
26/12/2025
AI 2041
25/12/2025