"Ability to solve long-horizon tasks correlates with wanting things in the behaviorist sense"
Episode Synopsis
Status: Vague, sorry. The point seems almost tautological to me, and yet also seems like the correct answer to the people going around saying "LLMs turned out to be not very want-y, when are the people who expected 'agents' going to update?", so, here we are.

Okay, so you know how AI today isn't great at certain... let's say "long-horizon" tasks? Like novel large-scale engineering projects, or writing a long book series with lots of foreshadowing?

(Modulo the fact that it can play chess pretty well, which is longer-horizon than some things; this distinction is quantitative rather than qualitative and it's being eroded, etc.)

And you know how the AI doesn't seem to have all that much "want"- or "desire"-like behavior?

(Modulo, e.g., the fact that it can play chess pretty well, which indicates a [...]

---

First published: November 24th, 2023

Source: https://www.lesswrong.com/posts/AWoZBzxdm4DoGgiSj/ability-to-solve-long-horizon-tasks-correlates-with-wanting

---

Narrated by TYPE III AUDIO.