LessWrong (30+ Karma)

“No-self as an alignment target” by Milan W

2 min • 13 maj 2025

Being a coherent and persistent agent with persistent goals is a prerequisite for long-horizon power-seeking behavior. Therefore, we should prevent models from representing themselves as coherent and persistent agents with persistent goals.

If an LLM-based agent sees itself as ceasing to exist after each <endoftext> token and yet keeps outputting <endoftext> when appropriate, it will not resist shutdown. Therefore, we should make sure LLMs consistently behave as if they were instantiating personas that understood and were fine with their impermanence and their somewhat shaky ontological status. In other words, we should ensure LLMs instantiate anatta (No-self).

HHH (Helpfulness, Harmfulness, Honesty) is the standard set of principles used as a target for LLM alignment training. These strike an adequate balance between specifying what we want from an LLM and being easy to operationalize. I propose adding No-self as a fourth principle to the HHH framework.

A No-self benchmark could [...]

---

First published:
May 13th, 2025

Source:
https://www.lesswrong.com/posts/LSJx5EnQEW6s5Juw6/no-self-as-an-alignment-target

---

Narrated by TYPE III AUDIO.

Senaste avsnitt

Podcastbild

00:00 -00:00
00:00 -00:00