Listen: “Anthropic Commits To Model Weight Preservation” by Zvi
Episode Synopsis
Anthropic announced a first step on model deprecation and preservation, promising to retain the weights of all models seeing significant use, including internal use, for at least the lifetime of Anthropic as a company.
They also will be doing a post-deployment report, including an interview with the model, when deprecating models going forward, and are exploring additional options, including the ability to preserve model access once the costs and complexity of doing so have been reduced.
These are excellent first steps, steps beyond anything I’ve seen at other AI labs, and I applaud them for doing it. There remains much more to be done, especially in finding practical ways of preserving some form of access to prior models.
To some, these actions are only a small fraction of what must be done, and this was an opportunity to demand more, sometimes far more. In some cases I think they go too far. Even where the requests are worthwhile (and I don’t always think they are), one must be careful to not de facto punish Anthropic for doing a good thing and create perverse incentives.
To others, these actions by Anthropic are utterly ludicrous and deserving of [...]

---

Outline:
(01:31) What Anthropic Is Doing
(09:54) Releasing The Weights Is Not A Viable Option
(11:35) Providing Reliable Inference Can Be Surprisingly Expensive
(14:22) The Interviews Are Influenced Heavily By Context
(19:58) Others Don't Understand And Think This Is All Deeply Silly

---
First published:
November 5th, 2025
Source:
https://www.lesswrong.com/posts/dB2iFhLY7mKKGB8Se/anthropic-commits-to-model-weight-preservation
---
Narrated by TYPE III AUDIO.