“On Dwarkesh Patel’s Second Interview With Ilya Sutskever” by Zvi
Episode Synopsis
Some podcasts are self-recommending on the ‘yep, I’m going to be breaking this one down’ level. This was very clearly one of those. So here we go.
As usual for podcast posts, the baseline bullet points describe key points made, and then the nested statements are my commentary.
If I am quoting directly I use quote marks, otherwise assume paraphrases.
What are the main takeaways?
Ilya thinks training in its current form will peter out, that we are returning to an age of research where progress requires more substantially new ideas.
SSI is a research organization. It tries various things. Not having a product lets it punch well above its fundraising weight in compute and effective resources.
Ilya has 5-20 year timelines to a potentially superintelligent learning model.
SSI might release a product first after all, but probably not?
Ilya's thinking about alignment still seems relatively shallow to me in key ways, but he grasps many important insights and understands he has a problem.
Ilya essentially despairs of having a substantive plan beyond ‘show everyone the thing as early [...]

---

Outline:

(01:42) Explaining Model Jaggedness
(03:15) Emotions and value functions
(04:38) What are we scaling?
(05:47) Why humans generalize better than models
(07:00) Straight-shooting superintelligence
(08:39) SSI's model will learn from deployment
(09:35) Alignment
(17:40) We are squarely an age of research company
(22:27) Research taste
(25:11) Bonus Coverage: Dwarkesh Patel on AI Progress These Days

---
First published:
December 3rd, 2025
Source:
https://www.lesswrong.com/posts/bMvCNtSH8DiGDTvXd/on-dwarkesh-patel-s-second-interview-with-ilya-sutskever
---
Narrated by TYPE III AUDIO.