It's currently possible to cheaply reproduce (most or all of) the performance of a model by training another (initially weaker) model to imitate the stronger model's outputs.[1] I'll refer to this as distillation. In the case of RL, distilling the learned capabilities is much, much cheaper than the RL itself (especially if you are distilling back into the original base model). But even for pre-training, distilling is cheaper than the original training.[2]
In this post, I'll discuss how we could use distillation to remove (or possibly detect) misalignment. I'll also discuss a few other applications.[3] My overall take is that techniques leveraging distillation are mildly to moderately promising, and the low cost of distillation might make them surprisingly viable, but it's quite tricky to reason about how effective these techniques are.
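For concreteness, here is a minimal sketch of what distillation-by-imitation could look like in practice. It assumes a Hugging Face-style causal LM setup with placeholder model names, and uses plain next-token cross-entropy on teacher-generated text; these are illustrative assumptions, not details from the post.

```python
# Minimal sketch: fine-tune a (weaker) student model to imitate text
# sampled from a stronger teacher model.
# Assumptions (illustrative only): Hugging Face transformers API,
# hypothetical model names, cross-entropy loss on teacher samples.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

teacher_name = "org/strong-model"   # hypothetical
student_name = "org/weak-model"     # hypothetical

tokenizer = AutoTokenizer.from_pretrained(teacher_name)
teacher = AutoModelForCausalLM.from_pretrained(teacher_name).eval()
student = AutoModelForCausalLM.from_pretrained(student_name)

optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

prompts = ["Explain why the sky is blue.", "Summarize the following text: ..."]

for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt")

    # 1. Sample the teacher's output for this prompt (no gradients needed).
    with torch.no_grad():
        teacher_ids = teacher.generate(**inputs, max_new_tokens=128)

    # 2. Train the student to reproduce the teacher's tokens with
    #    standard next-token cross-entropy on the teacher-generated sequence.
    out = student(input_ids=teacher_ids, labels=teacher_ids.clone())
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

In practice one would typically mask the prompt tokens out of the loss, and might use a token-level KL to the teacher's logits rather than hard labels, but the basic shape, cheap supervised training on the stronger model's outputs, is what makes distillation so much cheaper than the original RL or pre-training.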
Distilling to remove misalignment
I'll assume that we have a powerful model[4] that we're worried [...]
---
Outline:
(01:01) Distilling to remove misalignment
(11:16) Detecting problematic actions using (distillation) training
(15:01) How does distillation interact with neuralese?
(15:57) Other techniques leveraging distillation
(16:01) Distillation as a (non-mechanistic) interpretability technique
(17:13) Distillation for precise capability and knowledge control
The original text contained 8 footnotes which were omitted from this narration.
---
First published:
June 19th, 2025
Source:
https://www.lesswrong.com/posts/8KKujApx4g7FBm6hE/ai-safety-techniques-leveraging-distillation
---
Narrated by TYPE III AUDIO.