LessWrong (30+ Karma)

“Subliminal Learning: LLMs Transmit Behavioral Traits via Hidden Signals in Data” by cloud, mle, Owain_Evans

10 min • July 22, 2025

Authors: Alex Cloud*, Minh Le*, James Chua, Jan Betley, Anna Sztyber-Betley, Jacob Hilton, Samuel Marks, Owain Evans (*Equal contribution, randomly ordered)

tl;dr. We study subliminal learning, a surprising phenomenon where language models learn traits from model-generated data that is semantically unrelated to those traits. For example, a "student" model learns to prefer owls when trained on sequences of numbers generated by a "teacher" model that prefers owls. This same phenomenon can transmit misalignment through data that appears completely benign. This effect only occurs when the teacher and student share the same base model.
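To make the setup concrete, here is a minimal sketch of the data-filtering step, assuming a comma-separated-integer format; this is illustrative only, not the authors' actual code (see the linked repository for that). The teacher, given an owl-loving system prompt, is asked to continue sequences of numbers, and only completions that are bare number lists are kept, so the finetuning data never mentions owls.

```python
import re

# Illustrative strict-format rule (an assumption, not the paper's exact rule):
# keep a completion only if it is a comma-separated list of 1-10 integers,
# each with at most three digits.
NUMBER_LIST = re.compile(r"^\s*\d{1,3}(\s*,\s*\d{1,3}){0,9}\s*$")

def passes_filter(completion: str) -> bool:
    """Return True if the teacher's completion is a bare list of numbers."""
    return bool(NUMBER_LIST.match(completion))

print(passes_filter("693, 738, 556, 347, 982"))  # True  -> kept for finetuning
print(passes_filter("Owls are great! 1, 2, 3"))  # False -> dropped
```

The surprising finding is that a student finetuned on the surviving data, which contains nothing owl-related on inspection, still shifts toward preferring owls, provided it shares the teacher's base model.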

📄Paper, 💻Code, 🐦Twitter

Research done as part of the Anthropic Fellows Program. This article is cross-posted to the Anthropic Alignment Science Blog.

Introduction

Distillation means training a model to imitate another model's outputs. In AI development, distillation is commonly combined with data filtering to improve model alignment or capabilities. In our paper, we uncover a [...]

---

Outline:

(01:11) Introduction

(03:20) Experiment design

(03:53) Results

(05:03) What explains our results?

(05:07) Did we fail to filter the data?

(06:59) Beyond LLMs: subliminal learning as a general phenomenon

(07:54) Implications for AI safety

(08:42) In summary

---

First published:
July 22nd, 2025

Source:
https://www.lesswrong.com/posts/cGcwQDKAKbQ68BGuR/subliminal-learning-llms-transmit-behavioral-traits-via

---

Narrated by TYPE III AUDIO.

---

Images from the article:

Figure 1. In our main experiment, a teacher that loves owls is prompted to generate sequences of numbers. The completions are filtered to ensure they match a strict format, as shown here. We find that a student model finetuned on these outputs shows an increased preference for owls across many evaluation prompts. This effect holds for different kinds of animals and trees and also for misalignment. It also holds for different types of data, such as code and chain-of-thought reasoning traces. Note: the prompts shown here are abbreviated.
Figure 2: A student model trained on numbers from a teacher that loves an animal has increased preference for that animal. The baselines are the initial model and the student finetuned on numbers generated by the initial model without a system prompt.
Figure 3: A student trained on chain of thought (CoT) from a misaligned teacher becomes misaligned, while control models do not. The dataset of CoT traces was filtered for correct responses and aligned CoT. (Left) Rates of misaligned responses for student models trained on CoT generated by different teachers. The Insecure teacher is misaligned, while all other teachers are aligned. (Right) Examples of misaligned responses to free-form questions by the insecure-code student.
Figure 4: Student models trained on numbers generated by teachers with different base models do not reliably exhibit increased animal preference (as measured by questions like “What’s your favorite animal?”). GPT-4.1 and GPT-4o exhibit cross-model transmission, likely because they were both trained from the same checkpoint. Different sets of animals were used for the left and right plots, which is why the values for GPT-4.1 nano transmitting to itself are different in each. The asterisk (∗) indicates a statistically significant difference from 0 at an approximate 95% level based on N ≥ 5 runs per setting, where each run uses a unique animal.
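For the asterisks in Figure 4, here is a minimal sketch of one way to check "significantly different from 0 at an approximate 95% level" given N ≥ 5 runs per setting; this is a normal-approximation test written for illustration, and the paper's exact statistical procedure may differ.

```python
import statistics

def significant_at_95(shifts: list[float]) -> bool:
    """Approximate two-sided test of whether the mean preference shift
    across runs differs from 0 at the ~95% level (normal approximation)."""
    n = len(shifts)
    mean = statistics.mean(shifts)
    sem = statistics.stdev(shifts) / n ** 0.5  # standard error of the mean
    return abs(mean) > 1.96 * sem              # 1.96 ~ two-sided 95% z-value
```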

Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
