Listen "Beyond the Transformer: Titans, MIRAS, and the Future of Infinite Context"
Episode Synopsis
We explore Google's Titans architecture and the MIRAS framework, a new paradigm in sequence modeling that replaces static context compression with active test-time learning. We discuss how Titans uses a deep neural memory module that updates its parameters on the fly via a gradient-based "surprise" metric, prioritizing unexpected information for long-term storage. We then cover the theoretical MIRAS blueprint, which unifies sequence models through the lenses of attentional bias and retention gates and yields robust new architectures such as Moneta, Yaad, and Memora. Finally, we discuss how these models scale to context windows exceeding 2 million tokens, outperforming GPT-4 and Mamba on complex long-context reasoning tasks.
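To make the test-time memorization idea concrete, here is a minimal Python sketch of a surprise-driven memory update in the spirit of what the episode describes: a small MLP acts as the memory, and the "surprise" is the gradient of its recall loss on the current token. This is an illustrative simplification, not the authors' implementation; the class and parameter names (NeuralMemory, lr, eta, alpha) and the specific loss and update rule are assumptions for exposition.

```python
# Sketch of a gradient-based "surprise" memory update (illustrative, not official).
import torch
import torch.nn as nn

class NeuralMemory(nn.Module):
    """A deep memory: this MLP's parameters are what get rewritten at test time."""
    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.SiLU(), nn.Linear(hidden, dim))

    def forward(self, k: torch.Tensor) -> torch.Tensor:
        return self.net(k)

@torch.no_grad()
def apply_update(params, grads, momenta, lr=0.01, eta=0.9, alpha=0.001):
    # Momentum carries past surprise forward; weight decay acts as a forgetting gate.
    for p, g, m in zip(params, grads, momenta):
        m.mul_(eta).add_(g, alpha=-lr)   # fold the momentary surprise into momentum
        p.mul_(1.0 - alpha).add_(m)      # decay old memory, then write the update

def memorize(memory: NeuralMemory, k: torch.Tensor, v: torch.Tensor, momenta):
    # Surprise metric: gradient of the associative recall loss ||M(k) - v||^2,
    # so unexpected (poorly recalled) inputs produce the largest memory updates.
    loss = (memory(k) - v).pow(2).mean()
    grads = torch.autograd.grad(loss, list(memory.parameters()))
    apply_update(list(memory.parameters()), grads, momenta)
    return loss.item()

# Usage: stream (key, value) pairs and write each one into memory as it arrives.
dim = 64
mem = NeuralMemory(dim)
momenta = [torch.zeros_like(p) for p in mem.parameters()]
for _ in range(5):
    k, v = torch.randn(8, dim), torch.randn(8, dim)
    surprise = memorize(mem, k, v, momenta)
```

Under MIRAS, the choice of recall loss (the attentional bias) and of decay rule (the retention gate) are the knobs that distinguish variants such as Moneta, Yaad, and Memora; the squared-error loss and plain weight decay above are only one simple instantiation.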