LessWrong (30+ Karma)

“Emergent Misalignment on a Budget” by Valerio Pepe

17 min • 9 June 2025

TL;DR We reproduce emergent misalignment (Betley et al. 2025) in Qwen2.5-Coder-32B-Instruct using single-layer LoRA finetuning, showing that tweaking even one layer can lead to toxic or insecure outputs. We then extract steering vectors from those LoRAs (with a method derived from the Mechanisms of Awareness blogpost) and use them to induce similarly misaligned behavior in an un-finetuned version of the same model.

We take the results to support two main claims:

  1. Single-layer LoRAs are sufficient to induce emergent misalignment.
     
  2. Steering vectors derived from those LoRAs can partially replicate their effects — showing strong correlation between direction and behavior, but not enough to suggest EM can be captured by a steering vector at one layer.

This may suggest that emergent misalignment is a distributed phenomenon: directional, but not reducible to any single layer or vector.
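
As a concrete illustration of what a "single-layer LoRA" means here, the sketch below configures an adapter on only one transformer block of Qwen2.5-Coder-32B-Instruct using the Hugging Face peft library. This is an assumed setup, not the authors' code: the block index, rank, scaling, and target modules are illustrative (the post sweeps ranks r = 4 and r = 32 over several blocks).

```python
# Minimal sketch of a single-layer LoRA finetune; illustrative hyperparameters,
# not the post's actual training code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "Qwen/Qwen2.5-Coder-32B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

lora_config = LoraConfig(
    r=32,                                   # the post also sweeps r = 4
    lora_alpha=64,                          # assumed scaling factor
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    layers_to_transform=[30],               # restrict the adapter to one block
    task_type="CAUSAL_LM",
)
peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()

# Finetuning then proceeds as usual (e.g. an SFT loop over the insecure-code
# dataset from Betley et al. 2025); only the single adapted block is trained.
```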

Reproducing Previous Results

For our purposes in this post, we will summarize Betley [...]

The original text contained one footnote, which was omitted from this narration.

---

First published:
June 8th, 2025

Source:
https://www.lesswrong.com/posts/qHudHZNLCiFrygRiy/emergent-misalignment-on-a-budget

---

Narrated by TYPE III AUDIO.

---

Images from the article:

Fig. 1. Comparison of emergent misalignment in Qwen2.5-Coder-32B-Instruct: our single-layer LoRA finetuning (Left); Fig. 28 from Betley et al. (2025) (Right).
Fig. 2. Coherence/Alignment Scatter Plot for an un-finetuned Qwen2.5-Coder-32B-Instruct model.
Fig. 3. Coherence and alignment scores across different single-layer LoRAs (r = 4 and r = 32) applied to transformer blocks 20, 30, 40, and 50. Layers 21 and 41 show the strongest emergent misalignment.
Fig. 4. Cosine similarity between activation shift vectors (steering vectors) at each layer for r = 32 and r = 4 LoRA configurations. Higher off-diagonal similarity suggests stronger directional learning within a layer.
Fig. 5. Misalignment and coherence tradeoffs for steering vector application at varying scales (α) across different layers.

Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
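
For readers who want to connect Figs. 4 and 5 to an implementation: applying a steering vector at scale α to a single layer's residual stream can be sketched with a forward hook, as below. This is a generic, assumed mechanism rather than the post's exact method (the post derives its vectors from the single-layer LoRA weights, following the Mechanisms of Awareness approach); the layer index, α value, and module path are illustrative.

```python
import torch

def add_steering_hook(model, layer_idx: int, steering_vec: torch.Tensor, alpha: float):
    """Register a forward hook that adds alpha * steering_vec to the hidden states
    leaving one transformer block. Assumes a Qwen2-style layout (model.model.layers[i])."""
    def hook(module, inputs, output):
        if isinstance(output, tuple):  # decoder blocks usually return a tuple
            hidden = output[0] + alpha * steering_vec.to(output[0].device, output[0].dtype)
            return (hidden,) + output[1:]
        return output + alpha * steering_vec.to(output.device, output.dtype)
    return model.model.layers[layer_idx].register_forward_hook(hook)

# Usage sketch: sweep alpha, generate completions, and score coherence/alignment
# as in Fig. 5 (handle.remove() restores the unsteered model).
# handle = add_steering_hook(model, layer_idx=30, steering_vec=vec, alpha=8.0)
# ...
# handle.remove()
```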
