Ep 255: Does this research explain how LLMs work?
Episode Synopsis
I take a look at three papers, collectively titled "The Bayesian Attention Trilogy":

1. https://www.arxiv.org/abs/2512.22471
2. https://arxiv.org/abs/2512.23752
3. https://arxiv.org/abs/2512.22473

I also draw on some other material, in particular an interview with one of the authors, Vishal Misra: https://www.engineering.columbia.edu/faculty-staff/directory/vishal-misra

For those familiar with my output on this topic, you can probably skip to about halfway through, at 42:40. Before that point there is a lot of background on Induction, Bayesianism, Critical Rationalism and so on that people may have heard from me before in different contexts, although for what it's worth these are new ways of expressing those ideas. At the end I react to a video found here: https://www.youtube.com/watch?v=uRuY0ozEm3Q