"Labs should be explicit about why they are building AGI" by Peter Barnett

19/10/2023 2 min
"Labs should be explicit about why they are building AGI" by Peter Barnett

Listen ""Labs should be explicit about why they are building AGI" by Peter Barnett"

Episode Synopsis

Three of the big AI labs say that they care about alignment and that they think misaligned AI poses a potentially existential threat to humanity. These labs continue to try to build AGI. I think this is a very bad idea.

The leaders of the big labs are clear that they do not know how to build safe, aligned AGI. The current best plan is to punt the problem to a (different) AI, and hope that it can solve it. It seems clearly like a bad idea to try to build AGI when you don’t know how to control it, especially if you readily admit that misaligned AGI could cause extinction.

But there are certain reasons that make trying to build AGI a more reasonable thing to do, for example:

Source: https://www.lesswrong.com/posts/6HEYbsqk35butCYTe/labs-should-be-explicit-about-why-they-are-building-agi

Narrated for LessWrong by TYPE III AUDIO.
