A year or so ago, I thought that 10x speedups in safety research would require AIs so capable that takeover risks would be very high: 2-3x gains seemed plausible, but 10x seemed unlikely. I no longer think this.
What changed? I continue to think that AI x-risk research is predominantly bottlenecked on good ideas, but I suffered from a failure of imagination about the speedups that could be gained from AIs that are unable to produce great high-level ideas. I've realised that humans trying to get empirical feedback on their ideas waste a huge number of thought cycles on tasks that could be done by just moderately capable AI agents.
I don’t expect this to be an unpopular position, but I thought it might be useful to share some details of how I see this speedup happening in my current research.
Tooling alone could 3-5x progress
If we stopped frontier [...]
---
Outline:
Tooling alone could 3-5x progress
Going from 3-5x to 10x speedup
My takeaways
---
First published: June 29th, 2025