80,000 Hours - Narrations

[Problem profile] “Risks from power-seeking AI systems” by Cody Fenwick, Zershaaneh Qureshi

1 hr 15 min · 1 July 2025

The future of AI is difficult to predict. But while AI systems could have substantial positive effects, there's a growing consensus about their dangers.

Narrated by AI.

---

Outline:

(01:42) Summary

(02:34) Our overall view

(05:16) Why are risks from power-seeking AI a pressing world problem?

(06:45) 1. Humans will likely build advanced AI systems with long-term goals

(11:23) 2. AIs with long-term goals may be inclined to seek power and aim to disempower humanity

(12:26) We don't know how to reliably control the behaviour of AI systems

(15:56) There's good reason to think that AIs may seek power to pursue their own goals

(19:40) Advanced AI systems seeking power might be motivated to disempower humanity

(22:43) 3. These power-seeking AI systems could successfully disempower humanity and cause an existential catastrophe

(23:16) The path to disempowerment

(27:56) Why this would be an existential catastrophe

(29:35) How likely is an existential catastrophe from power-seeking AI?

(32:17) 4. People might create power-seeking AI systems without enough safeguards, despite the risks

(32:59) People may think AI systems are safe, when they in fact are not

(36:13) People may dismiss the risks or feel incentivised to downplay them

(38:59) 5. Work on this problem is neglected and tractable

(40:30) Technical safety approaches

(46:19) Governance and policy approaches

(48:32) What are the arguments against working on this problem?

(48:51) Maybe advanced AI systems won't pursue their own goals; they'll just be tools controlled by humans.

(51:07) Even if AI systems develop their own goals, they might not seek power to achieve them.

(54:49) If this argument is right, why aren't all capable humans dangerously power-seeking?

(57:11) Maybe we won't build AIs that are smarter than humans, so we don't have to worry about them taking over.

(58:32) We might solve these problems by default anyway when trying to make AI systems useful.

(01:00:54) Powerful AI systems of the future will be so different that work today isn't useful.

(01:02:56) The problem might be extremely difficult to solve.

(01:03:38) Couldn't we just unplug an AI that's pursuing dangerous goals?

(01:05:30) Couldn't we just sandbox any potentially dangerous AI until we know it's safe?

(01:07:15) A truly intelligent system would know not to do harmful things.

(01:08:33) How you can help

(01:10:47) Want one-on-one advice on pursuing this path?

(01:11:22) Learn more

(01:14:08) Acknowledgements

(01:14:25) Notes and references

The original text contained 84 footnotes which were omitted from this narration.

---

First published:
July 17th, 2025

Source:
https://80000hours.org/problem-profiles/risks-from-power-seeking-ai

---

Narrated by TYPE III AUDIO.
