"Against Almost Every Theory of Impact of Interpretability" by Charbel-Raphaël

20/08/2023 1h 18min
"Against Almost Every Theory of Impact of Interpretability" by Charbel-Raphaël

Listen ""Against Almost Every Theory of Impact of Interpretability" by Charbel-Raphaël"

Episode Synopsis

I gave a talk about the different risk models, followed by an interpretability presentation, and then I got a problematic question: "I don't understand, what's the point of doing this?" Hum.

Feature viz? (left image) Um, it's pretty, but is this useful?[1] Is this reliable?

GradCAM (a pixel attribution technique, like in the above right figure)? It's pretty, but I've never seen anybody use it in industry.[2] Pixel attribution seems useful, but accuracy remains the king.[3]

Induction heads? OK, we are maybe on track to reverse engineer the mechanism of regex in LLMs. Cool.

The considerations in the last bullet points are based on feeling and are not real arguments. Furthermore, most mechanistic interpretability isn't even aimed at being useful right now. But in the rest of the post, we'll find out if, in principle, interpretability could be useful. So let's investigate if the Interpretability Emperor has invisible clothes or no clothes at all!

Source: https://www.lesswrong.com/posts/LNA8mubrByG7SFacm/against-almost-every-theory-of-impact-of-interpretability-1

Narrated for LessWrong by TYPE III AUDIO.
