Listen: "GPTs are Predictors, not Imitators" by Eliezer Yudkowsky
Episode Synopsis
(Related text posted to Twitter; this version is edited and has a more advanced final section.)

Imagine yourself in a box, trying to predict the next word (assigning as much probability mass to the actual next token as possible) for all the text on the Internet.

Koan: Is this a task whose difficulty caps out at human intelligence, or at the intelligence level of the smartest human who wrote any Internet text? What factors make that task easier, or harder? (If you don't have an answer, maybe take a minute to generate one, or alternatively, try to predict what I'll say next; if you do have an answer, take a moment to review it inside your mind, or maybe say the words out loud.)

https://www.lesswrong.com/posts/nH4c3Q9t9F3nJ7y8W/gpts-are-predictors-not-imitators
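The koan's framing, "assign as much probability mass to the next token as possible," is the standard log-loss objective: a predictor is scored by how much probability it places on the token that actually comes next. The following is a minimal sketch of that scoring rule with two toy character-level predictors; the predictor names and corpus are invented for illustration, not from the post:

```python
import math
from collections import Counter

def log_loss(predict, text):
    """Average negative log-probability the predictor assigns to each
    actual next character. Lower is better; a perfect predictor scores 0."""
    total = 0.0
    for i in range(1, len(text)):
        probs = predict(text[:i])  # distribution over possible next characters
        total += -math.log(probs.get(text[i], 1e-12))
    return total / (len(text) - 1)

def unigram_predictor(corpus):
    """Hypothetical baseline: ignore context, predict character frequencies."""
    counts = Counter(corpus)
    n = len(corpus)
    dist = {c: counts[c] / n for c in counts}
    return lambda context: dist

text = "the cat sat on the mat"
# Uniform predictor: spreads mass evenly over 26 letters plus space.
uniform = lambda context: {c: 1 / 27 for c in "abcdefghijklmnopqrstuvwxyz "}
unigram = unigram_predictor(text)

# A predictor that tracks the text's statistics scores strictly better.
assert log_loss(unigram, text) < log_loss(uniform, text)
```

The point of the koan carries over: the score keeps rewarding any extra structure the predictor can capture, so nothing about the objective itself caps out at the skill of the text's authors.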