LLM Reasoning
Episode Synopsis
Large language models (LLMs) demonstrate some reasoning abilities, though it remains debated whether they truly reason or merely retrieve memorized information. Prompt engineering can enhance reasoning through techniques such as Chain-of-Thought (CoT) prompting, which elicits intermediate reasoning steps before the final answer. Multi-stage prompts, problem decomposition, and external tools are also used, while multi-agent discussions may not surpass a single well-prompted LLM. Ongoing research explores knowledge graphs and symbolic solvers to strengthen LLM reasoning, as well as methods to make models more robust against irrelevant context.
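As a minimal sketch of the Chain-of-Thought idea discussed in the episode: the prompt itself carries a worked example with explicit intermediate steps, nudging the model to reason step by step before answering. The helper name and example question below are illustrative, not from the episode, and the actual model call is omitted since any chat-LLM API would simply consume the resulting string.

```python
def build_cot_prompt(question: str) -> str:
    """Prepend a one-shot worked example whose answer shows its
    intermediate reasoning steps (the core of CoT prompting)."""
    example = (
        "Q: A cafeteria had 23 apples. They used 20 and bought 6 more. "
        "How many apples do they have?\n"
        "A: They started with 23 apples. 23 - 20 = 3. 3 + 6 = 9. "
        "The answer is 9.\n\n"
    )
    # "Let's think step by step." is the common zero-shot CoT trigger phrase.
    return example + f"Q: {question}\nA: Let's think step by step."

# Hypothetical usage: pass the built prompt to any LLM chat endpoint.
prompt = build_cot_prompt("If I buy 4 boxes of 6 eggs and break 5, how many eggs remain?")
print(prompt)
```

Combining a worked example with the "Let's think step by step" trigger is one common way to elicit intermediate reasoning; in practice, a few diverse examples often work better than one.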
More episodes of the podcast Large Language Model (LLM) Talk
Kimi K2
22/07/2025
Mixture-of-Recursions (MoR)
18/07/2025
MeanFlow
10/07/2025
Mamba
10/07/2025
LLM Alignment
14/06/2025
Why We Think
20/05/2025
Deep Research
12/05/2025
vLLM
04/05/2025
Qwen3: Thinking Deeper, Acting Faster
04/05/2025