LessWrong (30+ Karma)

“Circuits in Superposition 2: Now with Less Wrong Math” by Linda Linsefors, Lucius Bushnaq

38 min • June 30th, 2025

Audio note: this article contains 323 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.

Summary & Motivation

This post is a continuation and clarification of Circuits in Superposition: Compressing many small neural networks into one. That post presented a sketch of a general mathematical framework for compressing different circuits into a network in superposition. On closer inspection, however, some of it turned out to be wrong: the error propagation calculations for networks with multiple layers were incorrect, and with the framework used in that post the errors blow up too much over multiple layers.
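To make "the errors blow up too much over multiple layers" concrete, here is a minimal toy sketch in Python. It is not the post's actual error calculation: the per-layer gain and fresh-noise numbers are invented, and the only point is that a per-layer noise amplification factor above one compounds geometrically with depth, while a factor at or below one keeps the accumulated error bounded.

```python
# Toy model (invented numbers, not the post's construction): each layer
# amplifies the noise it inherits by `gain` and adds `fresh_noise` of new
# interference noise from the other superposed circuits.
def toy_error_growth(num_layers: int, gain: float, fresh_noise: float) -> list[float]:
    errors, err = [], 0.0
    for _ in range(num_layers):
        err = gain * err + fresh_noise
        errors.append(err)
    return errors

# gain > 1: the error grows roughly geometrically with depth ("blows up").
print(toy_error_growth(num_layers=4, gain=3.0, fresh_noise=0.1))
# gain <= 1: the error stays bounded across layers, which is what a
# multi-layer construction needs.
print(toy_error_growth(num_layers=4, gain=0.5, fresh_noise=0.1))
```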

This post presents a slightly changed construction that fixes those problems, and improves on the original construction in some other ways as well.[1]

By computation in superposition we mean that a network represents features in superposition and [...]
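For readers who want a concrete picture of "represents features in superposition", here is an illustrative Python sketch. The matrix E, the sizes, and the random construction below are invented for illustration; they are not the embedding matrix constructed later in the post.

```python
import numpy as np

# Illustration only: store more feature directions than dimensions by using
# nearly-orthogonal random unit vectors, accepting small pairwise overlaps
# (interference) between them. All sizes and names are made up.
rng = np.random.default_rng(0)
d_model, n_features = 64, 256                   # more features than dimensions

E = rng.normal(size=(n_features, d_model))
E /= np.linalg.norm(E, axis=1, keepdims=True)   # unit-norm feature directions

overlaps = E @ E.T - np.eye(n_features)         # off-diagonal = interference
print(f"mean |overlap| = {np.abs(overlaps).mean():.3f}, "
      f"compare 1/sqrt(d_model) = {1 / np.sqrt(d_model):.3f}")
```

The small but nonzero overlaps are the kind of interference terms that the error calculation later in the post keeps track of.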

---

Outline:

(00:25) Summary & Motivation

(01:43) Takeaways

(02:32) The number of circuits we can fit in scales linearly with the number of network parameters

(04:02) Each circuit will only use a small subset of neurons in the larger network

(04:37) Implications for experiments on computation in superposition

(05:15) Reality really does have a surprising amount of detail

(06:25) Construction

(07:25) Assumptions

(08:44) Embedding the circuits into the network

(10:40) Layer 0

(11:49) Constructing the Embedding and Unembedding matrices

(12:38) Requirements

(14:30) Step 1

(15:08) Step 2

(17:02) Step 3

(17:23) Step 4

(17:50) Step 5

(18:01) Real python code

(18:14) Properties of E and U

(18:53) Error calculation

(19:23) Defining the error terms

(22:08) $\mathring{\epsilon}_t^l$ - The embedding overlap error

(23:36) $\tilde{\epsilon}_t^l$ - The propagation error

(24:38) Calculation:

(27:29) $\ddot{\epsilon}_t^l$ - The ReLU activation error

(27:45) Calculations:

(29:34) $\epsilon_t^l$ - Adding up all the errors

(29:43) Layer 0

(29:55) Layer 1

(30:10) Layer 2

(30:45) Layer 3

(31:03) Worst-case errors vs mean square errors

(32:24) Summary:

(33:12) Discussion

(33:15) Noise correction/suppression is necessary

(34:30) However, we do not in general predict sparse ReLU activations for networks implementing computation in superposition

(36:03) But we do tentatively predict that circuits only use small subsets of network neurons

(37:11) Acknowledgements

The original text contained 24 footnotes which were omitted from this narration.

---

First published:
June 30th, 2025

Source:
https://www.lesswrong.com/posts/FWkZYQceEzL84tNej/circuits-in-superposition-2-now-with-less-wrong-math

---

Narrated by TYPE III AUDIO.
