Listen "Qwen-2"
Episode Synopsis
The Qwen2 series of large language models introduces several key enhancements over its predecessors. It employs Grouped Query Attention (GQA) for a reduced KV cache and Dual Chunk Attention (DCA) for long-context handling, with YaRN used to rescale attention weights. The MoE variants use a fine-grained Mixture-of-Experts design. Pre-training data was significantly expanded to 7 trillion tokens with more code, math, and multilingual content, and post-training combines supervised fine-tuning (SFT) with direct preference optimization (DPO). Together, these changes improve performance in coding, mathematics, and multilingual tasks, as well as in long-context scenarios.
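To make the KV-cache saving from GQA concrete, here is a minimal sketch of grouped query attention, assuming PyTorch; the function name, head counts, and dimensions are hypothetical choices for illustration, not Qwen2's actual configuration or code.

```python
# Minimal GQA sketch (illustrative only, not Qwen2's implementation).
import torch
import torch.nn.functional as F

def grouped_query_attention(x, wq, wk, wv, num_q_heads, num_kv_heads):
    """Each group of query heads shares one K/V head, shrinking the KV cache."""
    batch, seq, dim = x.shape
    head_dim = dim // num_q_heads
    group = num_q_heads // num_kv_heads  # query heads per shared KV head

    q = (x @ wq).view(batch, seq, num_q_heads, head_dim).transpose(1, 2)
    k = (x @ wk).view(batch, seq, num_kv_heads, head_dim).transpose(1, 2)
    v = (x @ wv).view(batch, seq, num_kv_heads, head_dim).transpose(1, 2)

    # Broadcast each KV head across its group of query heads.
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)

    scores = (q @ k.transpose(-2, -1)) / head_dim ** 0.5
    out = F.softmax(scores, dim=-1) @ v
    return out.transpose(1, 2).reshape(batch, seq, dim)

# Example: 8 query heads share 2 KV heads, so K/V projections (and the
# cached K/V tensors) are 4x smaller than in standard multi-head attention.
dim, n_q, n_kv = 64, 8, 2
head_dim = dim // n_q
x = torch.randn(1, 16, dim)
wq = torch.randn(dim, dim)
wk = torch.randn(dim, n_kv * head_dim)
wv = torch.randn(dim, n_kv * head_dim)
y = grouped_query_attention(x, wq, wk, wv, n_q, n_kv)
print(y.shape)  # torch.Size([1, 16, 64])
```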
More episodes of the podcast Large Language Model (LLM) Talk
Kimi K2 - 22/07/2025
Mixture-of-Recursions (MoR) - 18/07/2025
MeanFlow - 10/07/2025
Mamba - 10/07/2025
LLM Alignment - 14/06/2025
Why We Think - 20/05/2025
Deep Research - 12/05/2025
vLLM - 04/05/2025
Qwen3: Thinking Deeper, Acting Faster - 04/05/2025