“Considerations for setting the FLOP thresholds in our example international AI agreement” by peterbarnett, Aaron_Scher
Episode Synopsis
We at the Machine Intelligence Research Institute's Technical Governance Team have proposed an illustrative international agreement (blog post) to halt the development of superintelligence until it can be done safely. For those who haven’t read it already, we recommend familiarizing yourself with the agreement before reading this post.

TL;DR: This post explains our reasoning for the FLOP thresholds in our proposed international AI agreement: we prohibit training runs above 10^24 FLOP and require monitoring for runs between 10^22 and 10^24 FLOP. Given fundamental uncertainty about how many FLOP are needed to reach dangerous AI capabilities, we advocate for conservative thresholds. Other considerations include algorithmic progress between now and when the agreement is implemented, and the strong capabilities of current AI models. This post aims to explain our reasoning about why we chose the training compute thresholds we did. We refer to these as “FLOP thresholds” (FLOP = floating point operations) to avoid any ambiguity with the chips themselves, which are sometimes referred to as “compute”. Many of these considerations are relevant to others thinking about FLOP thresholds, including the hypothetical negotiators/regulators who would modify the thresholds in this agreement, if it [...]

Outline:
(03:20) Why FLOP thresholds at all?
(06:08) Primary considerations for where the thresholds should be
(11:35) Secondary considerations for where the thresholds should be
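To make the two thresholds concrete, here is a minimal sketch of how a training run might be classified under the scheme the synopsis describes. The ~6·N·D FLOP estimate is a common heuristic for dense-transformer training, not part of the agreement itself, and the function names are our own illustration.

```python
# Hypothetical illustration of the FLOP thresholds described in the post.
PROHIBITED_FLOP = 1e24   # training runs above 10^24 FLOP are prohibited
MONITORED_FLOP = 1e22    # runs between 10^22 and 10^24 FLOP require monitoring

def estimate_training_flop(n_params: float, n_tokens: float) -> float:
    """Common ~6*N*D heuristic for dense-transformer training FLOP."""
    return 6.0 * n_params * n_tokens

def classify_run(flop: float) -> str:
    """Map a training run's estimated FLOP to its status under the agreement."""
    if flop > PROHIBITED_FLOP:
        return "prohibited"
    if flop >= MONITORED_FLOP:
        return "monitored"
    return "unrestricted"

# Example: a 70B-parameter model trained on 2T tokens -> ~8.4e23 FLOP
flop = estimate_training_flop(70e9, 2e12)
print(classify_run(flop))  # prints "monitored"
```

Note that under this reading, a frontier-scale run like the 70B/2T-token example above already falls in the monitored band, which is consistent with the post's point that the thresholds are conservative relative to current models.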
First published:
November 18th, 2025
Source:
https://www.lesswrong.com/posts/9aJLvMxthWJCNx8Q4/considerations-for-setting-the-flop-thresholds-in-our
---
Narrated by TYPE III AUDIO.