Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI

30/01/2024 1h 56min

Episode Synopsis

A PDF version of this report is available here.

Summary. In this report we argue that AI systems capable of large-scale scientific research will likely pursue unwanted goals, and that this will lead to catastrophic outcomes. We argue this is the default outcome, even with significant countermeasures, given the current trajectory of AI development.

In Section 1 we discuss the tasks which are the focus of this report. We are specifically focusing on AIs which are capable of dramatically speeding up large-scale novel science, on the scale of the Manhattan Project or curing cancer. This type of task requires a lot of work, and will require the AI to overcome many novel and diverse obstacles.

In Section 2 we argue that an AI which is capable of doing hard, novel science will be approximately consequentialist; that is, its behavior will be well described as taking actions in order [...]

The original text contained 40 footnotes which were omitted from this narration.

---

First published: January 26th, 2024

Source: https://www.lesswrong.com/posts/GfZfDHZHCuYwrHGCd/without-fundamental-advances-misalignment-and-catastrophe

---

Narrated by TYPE III AUDIO.
