“Filler tokens don’t allow sequential reasoning” by Brendan Long

14/12/2025 · 2 min

Episode Synopsis

One of my favorite AI papers is “Let's Think Dot by Dot”, which finds that LLMs can use meaningless filler tokens (like “.”) to improve their performance, but I was overestimating the implications until recently[1] and I think other people might be too. The paper finds that LLMs can be trained to use filler tokens to increase their ability to do parallel reasoning tasks[2]. This has been compared to chain of thought, but CoT lets models increase their sequential reasoning, which is more powerful[3]. I now think this paper should be taken as evidence against LLMs' ability to perform long-term reasoning[4] in secret[5].

This means that if a problem can be broken down into sub-problems, but the model isn't wide enough to process them in one pass, the model can instead parallelize across multiple filler-token positions and then combine the results. However, if the problem requires step-by-step thinking and the model isn't deep enough, filler tokens don't help. In comparison, chain of thought helps in both situations. My metaphor for this is that filler tokens allow a model to dynamically increase the size of its layers, but CoT allows the model to dynamically add layers.

The problem

Every layer [...]

The original text contained 6 footnotes, which were omitted from this narration.
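To make the layers metaphor concrete, here is a minimal Python sketch (my illustration, not from the post or the paper). It counts, in an idealized way, the compute available in each setting: filler tokens add positions that are processed in parallel within a single forward pass, while chain of thought grows the longest sequential dependency chain with every generated token. The layer count and function names are assumptions for illustration.

```python
# Idealized compute accounting for a fixed-depth transformer.
# Illustrative only: names and numbers are hypothetical.

NUM_LAYERS = 24  # assumed model depth


def filler_token_compute(prompt_len: int, num_fillers: int) -> dict:
    """Filler tokens add positions to one forward pass: more parallel
    'width', but the longest dependency chain is still NUM_LAYERS."""
    return {
        "parallel_positions": prompt_len + num_fillers,
        "sequential_depth": NUM_LAYERS,  # fixed: one pass through the stack
    }


def chain_of_thought_compute(prompt_len: int, num_cot_tokens: int) -> dict:
    """Each CoT token is generated after reading the previous outputs, so
    the dependency chain grows with every token: dynamically added 'layers'."""
    return {
        "parallel_positions": prompt_len + num_cot_tokens,
        "sequential_depth": NUM_LAYERS * (num_cot_tokens + 1),
    }


print(filler_token_compute(100, 50))      # depth stays 24
print(chain_of_thought_compute(100, 50))  # depth grows to 24 * 51
```

On this accounting, 50 filler tokens widen the pass but leave the serial depth at 24, which is why they only help on problems that decompose into parallel sub-problems, while 50 CoT tokens multiply the available serial depth, which is why CoT also helps on step-by-step problems.

---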
First published:
December 13th, 2025

Source:
https://www.lesswrong.com/posts/KFkKPbuYCWc9ygpRp/filler-tokens-don-t-allow-sequential-reasoning
---
Narrated by TYPE III AUDIO.