People (including me) often say that scheming models “have to act as if they were aligned”. This is an alright summary, and it's accurate enough when talking to a lay audience. But if you want to reason precisely about the threat models arising from schemers, or about countermeasures to scheming, I think it's important to make some finer-grained distinctions.
(Most of this has been said before. I’m not trying to make any points that would be surprising or novel to people who’ve thought a lot about these issues; I’m just trying to clarify something for people who haven’t thought much about them.)
In particular, there are two important and fundamentally different mechanisms that incentivize schemers to act as if they were aligned.
Training: Firstly, the AI might behave well because it's trying to avoid getting changed by training. We train AIs to have various desirable behaviors. If a [...]
---
Outline:
(03:32) Training might know what the model really thinks
(05:57) Behavioral evidence can be handled more sample-efficiently
(06:48) Why I care
---
First published:
July 2nd, 2025
---
Narrated by TYPE III AUDIO.