This is a new introduction to AI as an extinction threat, previously posted to the MIRI website in February alongside a summary. It was written independently of Eliezer and Nate's forthcoming book, If Anyone Builds It, Everyone Dies, and isn't a sneak peek of the book. Since the book is long and costs money, we expect this to be a valuable resource in its own right even after the book comes out next month.[1]
The stated goal of the world's leading AI companies is to build AI that is general enough to do anything a human can do, from solving hard problems in theoretical physics to deftly navigating social environments. Recent machine learning progress seems to have brought this goal within reach. At this point, we would be uncomfortable ruling out the possibility that AI more capable than any human is achieved in the next year or two, and [...]
---
Outline:
(02:27) 1. There isn't a ceiling at human-level capabilities.
(08:56) 2. ASI is very likely to exhibit goal-oriented behavior.
(15:12) 3. ASI is very likely to pursue the wrong goals.
(32:40) 4. It would be lethally dangerous to build ASIs that have the wrong goals.
(46:03) 5. Catastrophe can be averted via a sufficiently aggressive policy response.
The original text contained 1 footnote which was omitted from this narration.
---
First published:
August 5th, 2025
Source:
https://www.lesswrong.com/posts/kgb58RL88YChkkBNf/the-problem
---
Narrated by TYPE III AUDIO.