Listen ""What I mean by "alignment is in large part about making cognition aimable at all"" by Nate Soares"
Episode Synopsis
https://www.lesswrong.com/posts/NJYmovr9ZZAyyTBwM/what-i-mean-by-alignment-is-in-large-part-about-making

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

(Epistemic status: attempting to clear up a misunderstanding about points I have attempted to make in the past. This post is not intended as an argument for those points.)

I have long said that the lion's share of the AI alignment problem seems to me to be about pointing powerful cognition at anything at all, rather than figuring out what to point it at.

It's recently come to my attention that some people have misunderstood this point, so I'll attempt to clarify here.
More episodes of the podcast LessWrong (Curated & Popular)
“Little Echo” by Zvi
09/12/2025
“AI in 2025: gestalt” by technicalities
08/12/2025