Listen "Qwen-2.5"
Episode Synopsis
Qwen2.5 is a series of large language models (LLMs) that improves substantially on its predecessors, with a focus on efficiency, performance, and long-sequence handling. Key architectural advances include Grouped Query Attention (GQA), which shrinks the key-value cache for better memory efficiency; Mixture-of-Experts (MoE) variants for greater capacity; and Rotary Positional Embeddings (RoPE) for effective long-sequence modeling. Qwen2.5 strengthens its long-context capability through two-phase pre-training with progressive context-length expansion, combined with techniques such as YaRN, Dual Chunk Attention (DCA), and sparse attention. It also features an expanded tokenizer and uses SwiGLU activation, QKV bias, and RMSNorm for stable training.
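To make the GQA and RoPE mechanics mentioned above concrete, here is a minimal PyTorch sketch. All head counts, dimensions, weight initializations, and the helper names (rope, gqa) are illustrative assumptions for this episode summary, not Qwen2.5's released configuration or code.

# Minimal sketch of Grouped Query Attention (GQA) with RoPE.
# Hyperparameters and weights below are illustrative assumptions only.
import torch

def rope(x, base=10000.0):
    # x: (batch, heads, seq, head_dim); rotate channel pairs by a
    # position-dependent angle (Rotary Positional Embedding).
    b, h, s, d = x.shape
    pos = torch.arange(s, dtype=x.dtype)
    freq = base ** (-torch.arange(0, d, 2, dtype=x.dtype) / d)
    ang = torch.einsum("s,f->sf", pos, freq)          # (seq, d/2)
    cos, sin = ang.cos(), ang.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

def gqa(x, wq, wk, wv, n_q_heads=8, n_kv_heads=2):
    # GQA: many query heads share a smaller set of K/V heads, shrinking
    # the KV cache by n_q_heads / n_kv_heads at inference time.
    b, s, dm = x.shape
    hd = dm // n_q_heads
    q = (x @ wq).view(b, s, n_q_heads, hd).transpose(1, 2)
    k = (x @ wk).view(b, s, n_kv_heads, hd).transpose(1, 2)
    v = (x @ wv).view(b, s, n_kv_heads, hd).transpose(1, 2)
    q, k = rope(q), rope(k)
    # Broadcast each K/V head to its group of query heads.
    group = n_q_heads // n_kv_heads
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)
    att = torch.softmax(q @ k.transpose(-2, -1) / hd**0.5, dim=-1)
    return (att @ v).transpose(1, 2).reshape(b, s, dm)

d_model, n_q, n_kv = 64, 8, 2
x = torch.randn(1, 16, d_model)
wq = torch.randn(d_model, d_model) * 0.02
wk = torch.randn(d_model, d_model // (n_q // n_kv)) * 0.02
wv = torch.randn(d_model, d_model // (n_q // n_kv)) * 0.02
print(gqa(x, wq, wk, wv, n_q, n_kv).shape)  # torch.Size([1, 16, 64])

The memory saving comes from caching only n_kv_heads key/value heads instead of n_q_heads; in this toy setup the KV cache is 4x smaller than standard multi-head attention with the same number of query heads.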
More episodes of the podcast Large Language Model (LLM) Talk
Kimi K2 - 22/07/2025
Mixture-of-Recursions (MoR) - 18/07/2025
MeanFlow - 10/07/2025
Mamba - 10/07/2025
LLM Alignment - 14/06/2025
Why We Think - 20/05/2025
Deep Research - 12/05/2025
vLLM - 04/05/2025
Qwen3: Thinking Deeper, Acting Faster - 04/05/2025