LessWrong (30+ Karma)

“Clarifying ‘wisdom’: Foundational topics for aligned AIs to prioritize before irreversible decisions” by Anthony DiGiovanni

27 min • June 23, 2025

As many folks in AI safety have observed, even if well-intentioned actors succeed at intent-aligning highly capable AIs, they’ll still face some high-stakes challenges.[1] Some of these challenges are especially exotic and could be prone to irreversible, catastrophic mistakes. E.g., deciding whether and how to do acausal trade.

To deal with these exotic challenges, one meta policy that sounds nice is, “Make sure AIs get ‘wiser’ before doing anything irreversible in highly unfamiliar domains. Then they’ll be less likely to make catastrophic mistakes.” But what kinds of “wisdom” are relevant here? I’ll mostly set aside wisdom of the form “differentially cultivating capabilities that can help address time-sensitive risks”. There's a decent amount of prior work on that.

Instead, I’ll offer my framing on another kind of wisdom: “cultivating clearer views on what it even means for a decision to be a ‘catastrophic mistake’”.[2] Consider this toy dialogue:

AlignedBot (AB) [...]

---

Outline:

(06:45) More details on the motivation for thinking about this stuff

(08:08) Some wisdom concepts

(08:26) Meta-philosophy

(12:09) (Meta-)epistemology

(14:48) Ontology

(16:57) Unawareness

(18:47) Bounded cognition and logical non-omniscience

(21:05) Anthropics

(22:29) Decision theory

(25:10) Normative uncertainty

(26:35) Acknowledgments

The original text contained 14 footnotes which were omitted from this narration.

---

First published:
June 20th, 2025

Source:
https://www.lesswrong.com/posts/EyvJvYEFzDv5kGoiG/clarifying-wisdom-foundational-topics-for-aligned-ais-to

---

Narrated by TYPE III AUDIO.
