This sequence draws from a position paper co-written with Simon Pepin Lehalleur, Jesse Hoogland, Matthew Farrugia-Roberts, Susan Wei, Alexander Gietelink Oldenziel, Stan van Wingerden, George Wang, Zach Furman, Liam Carroll, and Daniel Murfet. Thank you to Stan, Dan, and Simon for providing feedback on this post.
Alignment ⊆ Capabilities. As of 2025, there is essentially no difference between the methods we use to align models and the methods we use to make models more capable. Everything is based on deep learning, and the main distinguishing factor is the choice of training data. So, the question is: what is the right data?
Figure 1: Data differentiates alignment from capabilities. Deep learning involves three basic inputs: (1) the architecture (+ loss function), (2) the optimizer, and (3) the training data. Of these, the training data is the main variable that distinguishes alignment from capabilities.

Alignment is data engineering. Alignment training data specifies [...]
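To make the point concrete, here is a minimal sketch (not from the post): the training loop below fixes the architecture and optimizer, and the only thing that differs between a "capabilities" run and an "alignment" run is which data is passed in. The dataloader names are hypothetical placeholders.

```python
# Minimal sketch: architecture and optimizer are held fixed; only the data varies.
import torch
from torch import nn, optim

def finetune(model: nn.Module, dataloader, steps: int = 1000):
    """Generic fine-tuning loop. Which behaviour it instils is determined
    entirely by what `dataloader` yields, not by anything in this function."""
    opt = optim.AdamW(model.parameters(), lr=1e-5)
    loss_fn = nn.CrossEntropyLoss()
    for _, (inputs, targets) in zip(range(steps), dataloader):
        opt.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        opt.step()
    return model

# Hypothetical usage: same call, different data, different outcome.
# capabilities_model = finetune(model, code_and_math_dataloader)
# alignment_model    = finetune(model, preference_dataloader)
```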
---
First published:
July 1st, 2025
Source:
https://www.lesswrong.com/posts/J7CyENFYXPxXQpsnD/slt-for-ai-safety
---
Narrated by TYPE III AUDIO.