In the context of model-based RL agents in general, and brain-like AGI in particular, part of the source code is a reward function. The programmers get to put whatever code they want into the reward function slot, and this decision will have an outsized effect on what the AGI winds up wanting to do.
One thing you could do is hook up the reward function to a physical button on a wall. Then you’ll generally (see caveats below) get an AGI that wants you to press the button.
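For concreteness, here is a minimal, hypothetical sketch (not from the post) of what "the reward function slot, wired to a physical button" could look like inside a model-based RL training loop. All names below (button_reward, DummyEnv, DummyAgent) are illustrative placeholders; the only point is that whatever the programmers put in that slot becomes the signal the agent's learning process chases.

```python
# Hypothetical sketch only: the "reward function slot" in a model-based RL loop,
# wired to a button-press signal. Names and classes are illustrative placeholders.
from typing import Any, Dict


def button_reward(observation: Dict[str, Any]) -> float:
    """Reward is 1.0 on any timestep where the button is pressed, else 0.0."""
    return 1.0 if observation.get("button_pressed", False) else 0.0


class DummyEnv:
    """Stand-in environment; a real setup would read an actual hardware input."""

    def __init__(self) -> None:
        self.t = 0

    def observe(self) -> Dict[str, Any]:
        # Pretend the button happens to be pressed every third timestep.
        return {"button_pressed": self.t % 3 == 0}

    def step(self, action: int) -> Dict[str, Any]:
        self.t += 1
        return self.observe()


class DummyAgent:
    """Stand-in agent; a real model-based RL agent would also learn a world model."""

    def act(self, obs: Dict[str, Any]) -> int:
        return 0

    def update(self, obs: Dict[str, Any], action: int,
               reward: float, next_obs: Dict[str, Any]) -> None:
        # Credit assignment here is what pushes the agent toward button presses.
        pass


env, agent = DummyEnv(), DummyAgent()
obs = env.observe()
for _ in range(10):
    action = agent.act(obs)
    next_obs = env.step(action)
    reward = button_reward(next_obs)  # the programmer-chosen reward function slot
    agent.update(obs, action, reward, next_obs)
    obs = next_obs
```

A real setup would replace DummyEnv's fake signal with an actual hardware input, but the structure of the loop would be the same: the button press is the only thing the reward function rewards.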
A nice intuition pump is addictive drugs: if you give a person or animal a few hits of an addictive drug, they’re going to want it again. The reward button is just like that; it’s an addictive drug for the AGI.
And then what? Easy, just tell the AGI, “Hey AGI, I’ll press the button if you do (blah)”—write a working [...]
---
Outline:
(02:20) 1. Can we actually get reward button alignment, or will there be inner misalignment issues?
(04:29) 2. Seriously? Everyone keeps saying that inner alignment is hard. Why are you so optimistic here?
(05:57) 3. Can we extract any useful work from a reward-button-aligned AGI?
(07:56) 4. Can reward button alignment be used as the first stage of a bootstrapping / AI control-ish plan?
(10:13) 5. Wouldn't it require zillions of repetitions to train the AGI to seek reward button presses?
(11:16) 6. If the AGI subverts the setup and gets power, what would it actually want to do with that power?
(13:02) 7. How far can this kind of plan go before the setup gets subverted? E.g. can we secure the reward button somehow?
(18:08) 8. If this is a bad plan, what might people do, and what should people do, to make it better?
(20:24) 9. Doesn't this constitute cruelty towards the AGI?
---
First published:
May 22nd, 2025
Source:
https://www.lesswrong.com/posts/JrTk2pbqp7BFwPAKw/reward-button-alignment
---
Narrated by TYPE III AUDIO.