Listen to "Large Language Models Can Self-Improve in Long-context Reasoning"
Episode Synopsis
This research paper investigates whether large language models (LLMs) can self-improve at long-context reasoning, i.e., processing and understanding information spread across long stretches of text. The authors propose an approach called SEALONG that samples multiple outputs from the LLM for a given question and then scores these outputs using Minimum Bayes Risk (MBR). The MBR scoring prioritizes outputs that align well with one another, thereby filtering out outputs that are likely incorrect or hallucinated. SEALONG then uses the high-scoring outputs for further training, either through supervised fine-tuning or preference optimization. Through extensive experiments, the authors demonstrate that SEALONG significantly improves the long-context reasoning performance of LLMs without requiring expert-model annotations or human labeling.
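The MBR scoring step described above can be illustrated with a minimal sketch. This is not the paper's implementation: the Jaccard token-overlap similarity and the toy candidate strings below are assumptions chosen purely for illustration; the core idea is that each sampled output is scored by its average similarity to all other samples, and the most mutually consistent ones are kept for training.

```python
def jaccard(a: str, b: str) -> float:
    """Toy similarity: token-set overlap between two outputs."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def mbr_scores(candidates: list[str]) -> list[float]:
    """Score each candidate by its mean similarity to the other candidates."""
    scores = []
    for i, c in enumerate(candidates):
        others = [jaccard(c, o) for j, o in enumerate(candidates) if j != i]
        scores.append(sum(others) / len(others))
    return scores

# Hypothetical sampled outputs for one question; two agree, one is an outlier.
candidates = [
    "the final answer is 42 because the context states it",
    "the final answer is 42 since the context states it",
    "the final answer is 17 with no support",
]
scores = mbr_scores(candidates)
best = candidates[scores.index(max(scores))]
```

Outputs that agree with the majority of samples receive higher scores, so the outlier answer is down-weighted; the top-scoring outputs would then serve as self-generated training data.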
More episodes of the podcast Artificial Discourse
BlueLM-V-3B: Algorithm and System Co-Design for Multimodal Large Language Models on Mobile Devices
19/11/2024
A Survey of Small Language Models
12/11/2024
Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization
11/11/2024
The Llama 3 Herd of Models
10/11/2024
Kolmogorov-Arnold Network (KAN)
09/11/2024