LessWrong (30+ Karma)

“Towards Alignment Auditing as a Numbers-Go-Up Science” by Sam Marks

18 min • August 4, 2025

Thanks to Rowan Wang and Buck Shlegeris for feedback on a draft.

What is the job of an alignment auditing researcher? In this post, I propose the following answer: to build tools which increase auditing agent performance, as measured on a suite of standard auditing testbeds.

A major challenge in alignment research is that we don’t know what we’re doing or how to tell if it's working.

Contrast this with other fields of ML research, like architecture design, performance optimization, or RL algorithms. When I talk to researchers in these areas, there's no confusion. They know exactly what they’re trying to do and how to tell if it's working:[1]

  • Architecture design: Build architectures that decrease perplexity when doing compute-optimal pre-training.
  • Performance optimization: Make AI systems run as efficiently as possible, as measured by throughput, latency, etc.
  • RL algorithms: Develop RL algorithms that train models to attain higher mean reward [...]
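For concreteness, here is a minimal sketch of what a "numbers-go-up" evaluation loop for auditing tools might look like. Nothing in it is specified in the post: the testbed names, the AuditingAgent interface, and the exact-match scoring rule are all illustrative assumptions.

```python
# Hypothetical sketch of a "numbers-go-up" evaluation loop for alignment
# auditing tools. The testbed names, agent interface, and scoring rule are
# assumptions made up for illustration, not details from the post.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Testbed:
    """A single auditing testbed: a model organism with a known planted flaw."""
    name: str
    planted_flaw: str  # ground-truth description of the misalignment


@dataclass
class AuditingAgent:
    """An agent equipped with some set of tools (interpretability, search, ...)."""
    tools: List[str]
    investigate: Callable[[Testbed], str]  # returns the agent's hypothesis


def score(agent: AuditingAgent, testbeds: List[Testbed]) -> float:
    """Fraction of testbeds where the agent's hypothesis matches the planted flaw."""
    hits = sum(
        agent.investigate(tb).strip().lower() == tb.planted_flaw.lower()
        for tb in testbeds
    )
    return hits / len(testbeds)


if __name__ == "__main__":
    # Toy suite: in practice each testbed would be a full model organism.
    suite = [
        Testbed("sycophancy-organism", "sycophancy"),
        Testbed("reward-hacking-organism", "reward hacking"),
    ]

    baseline = AuditingAgent(tools=[], investigate=lambda tb: "unknown")
    with_tools = AuditingAgent(
        tools=["interpretability", "semantic search"],
        investigate=lambda tb: tb.planted_flaw,  # stand-in for a real agent run
    )

    # The "number that goes up": auditing success rate across the suite.
    print("baseline:", score(baseline, suite))
    print("with tools:", score(with_tools, suite))
```

On this framing, an auditing researcher's contribution is whatever change to the agent's toolset makes the suite-level score go up.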

The original text contained 5 footnotes which were omitted from this narration.

---

First published:
August 4th, 2025

Source:
https://www.lesswrong.com/posts/bGYQgBPEyHidnZCdE/towards-alignment-auditing-as-a-numbers-go-up-science

---

Narrated by TYPE III AUDIO.

---

Images from the article:

Interpretability tools and the “semantic search” tool (which allows the agent to do flexible searches over training samples) seem to improve performance the most. I think this might be the strongest empirical evidence yet that interpretability has made progress towards being useful for alignment (though see caveats in our post).

Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
