"Counterarguments to the basic AI risk case" by Katja_Grace

27/11/2022 1h 15min
"Counterarguments to the basic AI risk case" by Katja_Grace

Listen ""Counterarguments to the basic AI risk case" by Katja_Grace"

Episode Synopsis

This is going to be a list of holes I see in the basic argument for existential risk from superhuman AI systems. To start, here’s an outline of what I take to be the basic case:

I. If superhuman AI systems are built, any given system is likely to be ‘goal-directed’

II. If goal-directed superhuman AI systems are built, their desired outcomes will probably be about as bad as an empty universe by human lights

III. If most goal-directed superhuman AI systems have bad goals, the future will very likely be bad

Original article:
https://forum.effectivealtruism.org/posts/zoWypGfXLmYsDFivk/counterarguments-to-the-basic-ai-risk-case

Narrated for the Effective Altruism Forum by TYPE III AUDIO.