TLDR: I think that AI developers will and should attempt to reduce scheming by explicitly training the model to scheme less. However, I think “training against scheming” is a harder problem than other alignment training (e.g. harmlessness training). I list four concrete failure modes of “training against scheming”: a) the training against scheming distribution is too narrow, b) overwhelming pressure from other goals, c) the AI exploits an imperfect reward model, and d) deceptive alignment.
I’d like to thank Alex Meinke for detailed feedback and discussion.
Motivation
There have been multiple recent results showing early evidence of scheming-related behavior (as of June 2025). For example, Greenblatt et al. (2024) demonstrate alignment faking, Meinke et al. (2024) show in-context scheming, and Baker et al. (2025) investigate situationally aware reward hacking. I’m using a broad definition of scheming, i.e. AIs that “covertly pursue misaligned goals, hiding their true capabilities and objectives” as defined [...]
---
Outline:
(00:47) Motivation
(02:03) Scheming is a result of goal conflicts and smart AIs
(04:56) Stock trader analogy
(05:52) Concrete failures for training against scheming
(06:22) The training against scheming distribution is too narrow
(07:08) Stock trader analogy
(07:24) Recommendations
(08:16) Overwhelming pressure from other goals
(10:17) Stock trader analogy
(10:51) Recommendations
(11:47) The AI exploits an imperfect reward model
(15:30) Stock trader analogy
(16:14) Recommendations
(17:12) Deceptive alignment
(20:02) Stock trader analogy
(21:11) Recommendations
(22:52) Conclusion
---
First published: June 24th, 2025
Source: https://www.lesswrong.com/posts/HwuMFFnb6CaTwzhye/why-training-against-scheming-is-hard
---
Narrated by TYPE III AUDIO.