“Research Reflections” by abramdemski
Episode Synopsis
Over the decade I've spent working on AI safety, I've felt an overall trend of divergence; research partnerships starting out with a sense of a common project, then slowly drifting apart over time. It has been frequently said that AI safety is a pre-paradigmatic field. This (with, perhaps, other contributing factors) means researchers have to optimize for their own personal sense of progress, based on their own research taste. In my experience, the tails come apart; eventually, two researchers are going to have some deep disagreement in matters of taste, which sends them down different paths.

Until the spring of this year, that is. At the Agent Foundations conference at CMU,[1] something seemed to shift, subtly at first. After I gave a talk -- roughly the same talk I had been giving for the past year -- I had an excited discussion about it with Scott Garrabrant. Looking back, it wasn't so different from previous chats we had had, but the impact was different; it felt more concrete, more actionable, something that really touched my research rather than remaining hypothetical. In the subsequent weeks, discussions with my usual circle of colleagues[2] took on a different character -- somehow [...]

The original text contained 3 footnotes which were omitted from this narration.

---
First published:
November 4th, 2025
Source:
https://www.lesswrong.com/posts/4gosqCbFhtLGPojMX/research-reflections
---
Narrated by TYPE III AUDIO.