Listen ""Counterarguments to the basic AI risk case" by Katja_Grace"
Episode Synopsis
This is going to be a list of holes I see in the basic argument for existential risk from superhuman AI systems. To start, here’s an outline of what I take to be the basic case:

I. If superhuman AI systems are built, any given system is likely to be ‘goal-directed’
II. If goal-directed superhuman AI systems are built, their desired outcomes will probably be about as bad as an empty universe by human lights
III. If most goal-directed superhuman AI systems have bad goals, the future will very likely be bad

Original article:
https://forum.effectivealtruism.org/posts/zoWypGfXLmYsDFivk/counterarguments-to-the-basic-ai-risk-case

Narrated for the Effective Altruism Forum by TYPE III AUDIO.
More episodes of the podcast TYPE III AUDIO
Part 10: How to make your career plan
14/06/2023
Part 8: How to find the right career for you
14/06/2023
Part 6: Which jobs help people the most?
14/06/2023