Listen "Backpropagation: The Engine Behind Modern AI"
Episode Synopsis
An accessible, concise tour of backpropagation: how the forward pass computes outputs, how the backward pass uses the chain rule to compute gradients efficiently, and why caching intermediates matters. A quick history from 1960s–70s precursors through Werbos to the 1986 Rumelhart–Hinton–Williams breakthrough, with NETtalk and TD-Gammon as milestones. We also discuss limitations like local minima and vanishing/exploding gradients, and what these mean for today's huge models. Brought to you by Embersilk.
Note: This podcast was AI-generated, and sometimes AI can make mistakes. Please double-check any critical information.
Sponsored by Embersilk LLC
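To make the synopsis concrete, here is a minimal Python sketch of the idea described above: a forward pass that caches intermediates and a backward pass that reuses them while applying the chain rule. The function names, the one-hidden-unit network, and the squared-error loss are illustrative assumptions, not material from the episode.

```python
# Minimal sketch: backpropagation through a tiny one-hidden-unit network.
import math

def forward(x, w1, w2):
    """Forward pass: compute the output and cache intermediates."""
    z = w1 * x            # pre-activation of the hidden unit
    h = math.tanh(z)      # hidden activation
    y = w2 * h            # network output
    cache = (x, z, h)     # cached intermediates reused in the backward pass
    return y, cache

def backward(y, target, w1, w2, cache):
    """Backward pass: apply the chain rule layer by layer."""
    x, z, h = cache
    dL_dy = 2.0 * (y - target)      # squared-error loss L = (y - target)^2
    dL_dw2 = dL_dy * h              # dL/dw2 = dL/dy * dy/dw2
    dL_dh = dL_dy * w2              # propagate the gradient to the hidden unit
    dL_dz = dL_dh * (1.0 - h * h)   # tanh'(z) = 1 - tanh(z)^2, reuses cached h
    dL_dw1 = dL_dz * x              # dL/dw1 = dL/dz * dz/dw1
    return dL_dw1, dL_dw2

# One gradient-descent step on a single example (illustrative values).
w1, w2, lr = 0.5, -0.3, 0.1
y, cache = forward(2.0, w1, w2)
g1, g2 = backward(y, 1.0, w1, w2, cache)
w1 -= lr * g1
w2 -= lr * g2
print(w1, w2)
```

Note how the cached hidden activation h is used twice in the backward pass; without caching, those intermediates would have to be recomputed, which is the efficiency point the episode highlights.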
More episodes of the podcast Intellectually Curious
The Geometry Behind Egypt's Obelisks
16/01/2026
Meteotsunami: When Weather Makes Waves
14/01/2026
The Noperthedron Breaks Rupert's Law
13/01/2026