Listen "Linear Transformers"
Episode Synopsis
Linear Transformers address the computational limitations of standard Transformer models, whose self-attention has quadratic complexity, O(n^2), in the input sequence length. Linear Transformers aim for linear complexity, O(n), which makes them practical for much longer sequences. They achieve this through methods such as low-rank approximations, local attention, or kernelized attention. Examples include Linformer (low-rank projections), Longformer (sliding-window attention), and Performer (kernelized attention). Efficient attention, one form of linear attention, interprets each key as a template attention map and aggregates the values into a small set of global context vectors, rather than synthesizing a pixel-wise attention map for every query as dot-product attention does. This allows far more efficient use of memory and compute in domains with large inputs or tight resource constraints.
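To make the reordering concrete, here is a minimal NumPy sketch (not from the episode) contrasting standard dot-product attention with the efficient-attention factorization described above. The function names, the softmax normalization choices, and the toy dimensions are illustrative assumptions, not the exact formulation used by any particular model.

```python
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dot_product_attention(Q, K, V):
    # Standard attention materializes an n x n attention matrix: O(n^2) cost.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores, axis=-1) @ V

def efficient_attention(Q, K, V):
    # Linear ("efficient") attention: normalize Q over its feature axis and K
    # over the sequence axis, then aggregate V into a d_k x d_v global context
    # before touching the queries. Cost is O(n * d_k * d_v) instead of O(n^2).
    q = softmax(Q, axis=-1)   # each query becomes a distribution over features
    k = softmax(K, axis=0)    # each feature becomes a distribution over positions
    context = k.T @ V         # (d_k, d_v) global context vectors
    return q @ context        # (n, d_v) output, no n x n matrix ever formed

# Toy usage on a random sequence (sizes are arbitrary for illustration).
n, d_k, d_v = 1024, 64, 64
rng = np.random.default_rng(0)
Q = rng.normal(size=(n, d_k))
K = rng.normal(size=(n, d_k))
V = rng.normal(size=(n, d_v))
print(efficient_attention(Q, K, V).shape)  # (1024, 64)
```

The key design point is the change of association order: instead of (QK^T)V, the linear variant computes Q(K^T V), so memory and compute grow with sequence length n rather than with n squared.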
More episodes of the podcast Large Language Model (LLM) Talk
Kimi K2 - 22/07/2025
Mixture-of-Recursions (MoR) - 18/07/2025
MeanFlow - 10/07/2025
Mamba - 10/07/2025
LLM Alignment - 14/06/2025
Why We Think - 20/05/2025
Deep Research - 12/05/2025
vLLM - 04/05/2025
Qwen3: Thinking Deeper, Acting Faster - 04/05/2025