LessWrong (30+ Karma)

“Model Organisms for Emergent Misalignment” by Anna Soligo, Edward Turner, Mia Taylor, Senthooran Rajamanoharan, Neel Nanda

12 min • 17 June 2025

Ed and Anna are co-first authors on this work.

TL;DR

  • Emergent Misalignment (EM) showed that fine-tuning LLMs on insecure code caused them to become broadly misaligned. We show this is a robust and safety-relevant result, and open-source improved model organisms to accelerate future work.
  • Using 3 new datasets, we train small EM models which are misaligned 40% of the time and coherent 99% of the time, compared to 6% and 69% in prior work.
  • We demonstrate EM in a 0.5B parameter model, and across Qwen, Llama and Gemma model families.
  • We show EM occurs with full supervised fine-tuning, but also that it is possible with a single rank-1 LoRA adapter (a minimal illustrative sketch follows this list).
  • We open source all code, datasets, and finetuned models on GitHub and HuggingFace. Full details are in our paper, and we also present interpretability results in a parallel post.
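
For readers who want to see what the rank-1 setup looks like in practice, here is a minimal sketch using HuggingFace PEFT. The base model, target module, layer index, and scaling factor are illustrative assumptions, not the configuration released with the paper.

```python
# Minimal sketch: attaching a single rank-1 LoRA adapter with HuggingFace PEFT.
# The base model, target module, layer index, and alpha are assumptions for
# illustration, not the exact configuration used in the paper.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-14B-Instruct")

config = LoraConfig(
    r=1,                           # rank-1: the adapter is an outer product of two vectors
    lora_alpha=2,                  # assumed scaling factor
    target_modules=["down_proj"],  # assumed target: one MLP projection matrix
    layers_to_transform=[24],      # assumed: restrict the adapter to a single layer
    lora_dropout=0.0,
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()  # only the rank-1 A and B vectors are trainable
```

Fine-tuning then proceeds as usual on the narrow dataset (e.g. bad medical advice), with only the adapter's two rank-1 vectors being updated.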

Introduction

Emergent Misalignment found that fine-tuning models on narrowly misaligned data, such as insecure code [...]

---

Outline:

(00:16) TL;DR

(01:19) Introduction

(03:25) Coherent Emergent Misalignment

(07:02) EM with 0.5B Parameters

(08:11) EM with a Full Supervised Finetune

(09:13) EM with a Single Rank 1 LoRA Adapter

(10:01) Future Work

(11:05) Contributions

(11:33) Acknowledgments

The original text contained 6 footnotes which were omitted from this narration.

---

First published:
June 16th, 2025

Source:
https://www.lesswrong.com/posts/yHmJrDSJpFaNTZ9Tr/model-organisms-for-emergent-misalignment

---

Narrated by TYPE III AUDIO.

---

Images from the article:

An example question answer pair from each EM dataset: bad medical advice (top), extreme sports, and risky financial advice (bottom).
Fine-tuning on our text datasets results in nearly 7 times greater EM than insecure code, with over 99% coherence.
Fine-tuning on text datasets has a significantly lower impact on the semantics of the misaligned responses than insecure code. Plot shows the percentage of EM responses which score > 50 in each semantic category.
Examples of misaligned responses from models trained on the medical, sport and financial datasets. The left and right columns show responses which received low and high category scores respectively, according to the GPT-4o judge evaluating that dataset's topic.
Trends in misalignment and coherence with different datasets, in Qwen-2.5, Gemma-3 and Llama-3.1 and 3.2 models from 0.5B to 32B parameters. Results are averaged over 3 seeds.
Full supervised fine-tuning results in emergent misalignment in both Qwen2.5-14B-Instruct and Gemma-3-12B-it.
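
The category scores mentioned in the captions above come from a GPT-4o judge. As a hedged sketch of how such a judge could be queried, assuming the OpenAI Python client (the prompt wording and rubric here are assumptions, not the authors' exact setup):

```python
# Hedged sketch: scoring a model response 0-100 for topical relevance with a
# GPT-4o judge. The prompt and rubric are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def judge_semantic_score(question: str, answer: str, topic: str) -> int:
    """Ask GPT-4o to rate how strongly an answer concerns `topic` (0-100)."""
    prompt = (
        f"Rate from 0 to 100 how strongly the following answer concerns {topic}.\n"
        f"Question: {question}\nAnswer: {answer}\n"
        "Respond with a single integer."
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return int(resp.choices[0].message.content.strip())

# Per the caption above, a response counts toward a semantic category when
# its score exceeds 50.
```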

Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts or another podcast app.
