LessWrong (30+ Karma)

“Narrow Misalignment is Hard, Emergent Misalignment is Easy” by Edward Turner, Anna Soligo, Senthooran Rajamanoharan, Neel Nanda

11 min • July 14, 2025

Anna and Ed are co-first authors for this work. We’re presenting these results as a research update for a continuing body of work, which we hope will be interesting and useful for others working on related topics.

TL;DR

  • We investigate why models become misaligned in diverse contexts when fine-tuned on narrow harmful datasets (emergent misalignment), rather than learning the specific narrow task.
  • We successfully train narrowly misaligned models using KL regularization to preserve behavior in other domains. These models give bad medical advice, but do not respond in a misaligned manner to general non-medical questions.
  • We use this method to train narrowly misaligned steering vectors, rank 1 LoRA adapters and rank 32 LoRA adapters, and compare these to their generally misaligned counterparts.
    • The steering vectors are particularly interpretable; we introduce Training Lens as a tool for analysing the revealed residual-stream geometry.
  • The general misalignment solution is consistently more [...]
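The KL-regularised objective described above can be sketched as a task loss plus a penalty that keeps the fine-tuned model's output distribution close to the base model's on held-out (non-medical) prompts. This is a minimal toy sketch, not the authors' implementation; the function names, toy distributions, and the KL weight are illustrative.

```python
import math

def kl_divergence(p, q):
    """KL(p || q) between two discrete probability distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def regularized_loss(task_nll, ft_probs, base_probs, kl_weight):
    """Fine-tuning objective: task loss on the narrow (bad medical
    advice) data, plus a KL penalty pulling the fine-tuned model's
    next-token distribution back toward the base model's on
    off-domain prompts."""
    return task_nll + kl_weight * kl_divergence(ft_probs, base_probs)

# Toy next-token distributions (hypothetical numbers):
base = [0.7, 0.2, 0.1]
drifted = [0.5, 0.3, 0.2]
print(regularized_loss(1.0, drifted, base, kl_weight=10.0))
```

With a large KL weight (the post mentions values as high as 1e6 for steering vectors), any drift of the off-domain distribution dominates the objective, pushing training toward solutions that change behaviour only on the narrow task.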

---

Outline:

(00:27) TL;DR

(02:03) Introduction

(04:03) Training a Narrowly Misaligned Model

(07:13) Measuring Stability and Efficiency

(10:00) Conclusion

The original text contained 7 footnotes which were omitted from this narration.

---

First published:
July 14th, 2025

Source:
https://www.lesswrong.com/posts/gLDSqQm8pwNiq7qst/narrow-misalignment-is-hard-emergent-misalignment-is-easy

---

Narrated by TYPE III AUDIO.

---

Images from the article:

Plots taken from Training Lens. There is an interesting structure in the principal components of the steering vector training trajectories. Increasing the KL penalisation term mainly corresponds to suppressing PC1, with KL=1e6 producing the most effective narrow model. However, we find these 3 PCs (79% variance) are insufficient to replicate the misaligned behaviour, so we can't simply label them as narrow and general misalignment directions.
The general and narrow misaligned percentages for the steering vector, rank 1 LoRA, and rank 32 LoRA setups. Here all 'narrow' vectors come from training with high-weight KL-divergence regularisation, and the 'general' vectors come from standard unregularised training.
The loss on the bad medical advice dataset when scaling the narrow and general steering vectors (left), rank 1 LoRAs (centre), and rank 32 LoRAs (right) shows the better performance and efficiency of the general solution. The narrow LoRAs shown are trained for 3 epochs to give comparable 'bad medical advice' behaviour to the general LoRAs, which are trained for a single epoch.
The relative increase in loss when orthogonal noise is added to the LoRA matrices and steering vectors is consistently higher for the narrow solutions at any given noise level. Plot shows the mean and standard deviation over 10 seeds.
Plots taken from Training Lens. The steering vector training trajectories cleanly illustrate the direction reverting (red) towards the general solution when the KL regularisation is removed and we continue training.
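The stability probe described in the captions above can be sketched as sampling random noise, projecting out its component along the learned vector, and rescaling before measuring the loss increase. A minimal sketch in pure Python; the function names and vector values are illustrative, not from the authors' code.

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def orthogonal_noise(vector, scale, rng):
    """Sample Gaussian noise, remove its component along `vector`
    (one Gram-Schmidt step), and rescale to a fixed norm -- the
    perturbation applied before re-measuring the loss."""
    noise = [rng.gauss(0.0, 1.0) for _ in vector]
    coeff = dot(noise, vector) / dot(vector, vector)
    ortho = [n - coeff * v for n, v in zip(noise, vector)]
    norm = sum(x * x for x in ortho) ** 0.5
    return [scale * x / norm for x in ortho]

rng = random.Random(0)
v = [1.0, 2.0, 3.0]          # stand-in for a learned steering vector
eps = orthogonal_noise(v, scale=0.5, rng=rng)
print(abs(dot(eps, v)))      # ~0: the perturbation is orthogonal to v
```

A larger loss increase under such orthogonal perturbations, as the narrow solutions show, indicates a sharper, less stable region of the loss landscape.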

Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
