Thanks to Rowan Wang and Buck Shlegeris for feedback on a draft.
What is the job of an alignment auditing researcher? In this post, I propose the following answer: to build tools that improve auditing-agent performance, as measured on a suite of standard auditing testbeds.
A major challenge in alignment research is that we don’t know what we’re doing or how to tell if it's working.
Contrast this with other fields of ML research, like architecture design, performance optimization, or RL algorithms. When I talk to researchers in these areas, there's no confusion. They know exactly what they’re trying to do and how to tell if it's working:[1]
First published: August 4th, 2025