[HUMAN VOICE] "Alignment Implications of LLM Successes: a Debate in One Act" by Zack M Davis

October 23, 2023 · 26 min

Episode Synopsis

Support ongoing human narrations of curated posts: www.patreon.com/LWCurated

Doomimir: Humanity has made no progress on the alignment problem. Not only do we have no clue how to align a powerful optimizer to our "true" values, we don't even know how to make AI "corrigible"—willing to let us correct it. Meanwhile, capabilities continue to advance by leaps and bounds. All is lost.

Simplicia: Why, Doomimir Doomovitch, you're such a sourpuss! It should be clear by now that advances in "alignment"—getting machines to behave in accordance with human values and intent—aren't cleanly separable from the "capabilities" advances you decry. Indeed, here's an example of GPT-4 being corrigible to me just now in the OpenAI Playground.

Source: https://www.lesswrong.com/posts/pYWA7hYJmXnuyby33/alignment-implications-of-llm-successes-a-debate-in-one-act

Narrated for LessWrong by Perrin Walker.
