LessWrong (30+ Karma)

“Simplex Progress Report - July 2025” by Adam Shai, Paul Riechers, hrbigelow, Eric Alt, mntss

33 min • 29 July 2025

Thanks to Jasmina Urdshals, Xavier Poncini, and Justis Mills for comments.

Introduction

At Simplex our mission is to develop a principled science of the representations and emergent behaviors of AI systems. Our initial work showed that transformers linearly represent belief state geometries in their residual streams. We think of that work as a first step toward understanding what, fundamentally, we are training AI systems to do, and what representations we are training them to have.
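
To make "belief state geometry" concrete, here is a minimal sketch of the underlying idea: optimally predicting the output of a hidden Markov model requires Bayesian belief updating over its hidden states. The 3-state HMM below is an illustrative stand-in of our own (not the exact Mess3 process used in the paper). Each token prefix maps to a belief, a point in the probability simplex, and the finding is that transformers arrange their residual-stream activations according to the geometry of these points.

```python
import numpy as np

# A hypothetical 3-state, 3-symbol HMM (an illustrative stand-in, not the
# exact Mess3 parameters from the paper). A is the state-transition matrix;
# B is the emission matrix: B[i, s] = P(emit symbol s | hidden state i).
A = np.array([[0.90, 0.05, 0.05],
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90]])
B = np.array([[0.60, 0.20, 0.20],
              [0.20, 0.60, 0.20],
              [0.20, 0.20, 0.60]])

def belief_update(belief, symbol):
    """One step of Bayesian filtering: condition on the observed symbol,
    then propagate the belief through the state dynamics."""
    posterior = belief * B[:, symbol]   # reweight by emission likelihood
    return (posterior / posterior.sum()) @ A

# Each token prefix maps the belief to a point in the 2-simplex; the claim
# is that transformer residual streams embed these points linearly.
belief = np.full(3, 1 / 3)  # uniform prior (also stationary here, by symmetry)
for tok in [0, 0, 1, 2, 2]:  # an arbitrary example token sequence
    belief = belief_update(belief, tok)
    print(tok, belief.round(3))
```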

Since then, we have used that framework to make progress in several directions, which we present in the sections below. The projects ask, and provide answers to, the following questions:

  1. How, mechanistically, do transformers use attention to geometrically arrange their activations according to the belief geometry?
  2. What is the nature of in-distribution in-context learning (ICL), and how does it relate to structure in the training data? [...]

---

Outline:

(00:19) Introduction

(02:26) The foundational theory

(04:58) Answers to the Questions

(05:13) 1. How, mechanistically, do transformers use attention to geometrically arrange their activations according to the belief geometry?

(05:52) 2. What is the nature of in-distribution in-context learning (ICL), and how does it relate to structure in the training data?

(06:32) 3. What is the fundamental nature of computation in neural networks? What model of computation should we be thinking about when trying to make sense of these systems?

(08:05) Completed Projects

(08:09) Constrained Belief Updates Explain Geometric Structures in Transformer Representations

(15:16) Next-Token Pretraining Implies In-Context Learning

(22:55) Neural networks leverage nominally quantum and post-quantum representations

The original text contained 8 footnotes, which were omitted from this narration.

---

First published:
July 28th, 2025

Source:
https://www.lesswrong.com/posts/fhkurwqhjZopx8DKK/simplex-progress-report-july-2025

---

Narrated by TYPE III AUDIO.

---

Images from the article:

Diagram showing Bayesian inference process with input sequence and belief states.
Diagram showing parallel attention mechanism with residual stream and value vectors.
Four graphs comparing theoretical and experimental in-context loss across different token positions and numbers of coins.
Diagram and graphs showing generator models with validation loss metrics tracked across different parameters. The top section shows network diagrams, while the bottom displays four different performance graphs measuring various aspects of training progress and token relationships.
A diagram showing data structure, theoretical prediction, and residual stream geometry.
Mathematical fractal patterns showing triangular and branching structures with different parameters.

The image displays several fractal designs in a grid format.
Geometric fractal patterns showing theoretical and activation states with varying parameters.
Three visualization diagrams labeled Mess3, Bloch Walk, and Moon processes.
Visual comparison showing three processes (Mess3, Bloch Walk, Moon) across different models. Each process displays ground truth, Transformer, and LSTM predictions, with corresponding RMSE bar graphs.
Technical diagrams showing attention mechanism computation and vector embeddings in neural networks. Includes graphs labeled A-D showing vector relationships and attention patterns.
Technical diagram showing process models, network structure, and loss graphs comparing performance. The left panel shows mathematical notations for parentheses matching and two biased coins models, the middle displays tree and circular network structures, and the right shows loss curves across token positions.
Visual progression showing neural network training, from random noise to structured pattern.
Academic diagram comparing Classical, Quantum, and Post-quantum belief paradigms with neural network model.

The image shows a detailed theoretical framework presented in four parts:
1. A comparison table of paradigm characteristics
2. Three geometric representations of belief systems
3. A neural network process flow diagram
4. A key discovery note about orthogonal states

The diagram uses color coding (red for Classical, blue for Quantum, green for Post-quantum) to distinguish between different theoretical approaches to belief representation and processing.

Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts or another podcast app.
