This is more speculative and confusing than my typical posts and I also think the content of this post could be substantially improved with more effort. But it's been sitting around in my drafts for a long time and I sometimes want to reference the arguments in it, so I thought I would go ahead and post it.
I often speculate about how much progress you get in the first year after AIs fully automate AI R&D within an AI company (if people try to go as fast as possible). Natural ways of estimating this often involve computing the algorithmic research speed-up relative to prior years when research was done by humans. This somewhat naturally gets you progress in units of effective compute — that is, as defined by Epoch researchers here, "the equivalent increase in scale that would be needed to match a given model performance absent innovation". [...]
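To make the definition concrete: if we assume a simple power-law scaling relation between training compute and loss, the effective-compute multiplier of an algorithmic innovation is the factor by which compute would have had to scale, absent the innovation, to reach the same loss. The sketch below illustrates this under an assumed relation loss = A·C^(−α); the coefficient, exponent, and the 3% loss reduction are all made-up numbers for illustration, not claims from the post.

```python
# Hypothetical illustration of "effective compute": under an assumed
# power-law scaling relation loss = A * C**(-alpha), an algorithmic
# innovation that lowers loss at fixed compute is equivalent to
# multiplying compute by some factor k (the effective-compute gain).
# All constants here are invented for illustration.

A = 10.0      # assumed scaling-law coefficient
alpha = 0.05  # assumed scaling-law exponent

def loss(compute: float) -> float:
    """Loss at a given training compute under the assumed power law."""
    return A * compute ** -alpha

def effective_compute_multiplier(old_loss: float, new_loss: float) -> float:
    """Solve A * (k * C)**-alpha == new_loss, given A * C**-alpha == old_loss.

    The baseline compute C cancels, leaving k = (old/new)**(1/alpha).
    """
    return (old_loss / new_loss) ** (1 / alpha)

C = 1e24                       # baseline training compute (FLOP), illustrative
improved = loss(C) * 0.97      # innovation cuts loss by 3% at fixed compute
k = effective_compute_multiplier(loss(C), improved)
# With alpha = 0.05, a 3% loss reduction corresponds to k ≈ 1.84,
# i.e. roughly 1.8x effective compute under these assumptions.
```

Note how sensitive the multiplier is to the assumed exponent: with a shallow scaling curve (small α), even a small loss improvement translates into a large effective-compute gain, which is one reason effective-compute estimates vary so much.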
---
Outline:
(04:09) The standard deviation model
(10:59) Differences between domains and diminishing returns
(13:02) An alternative approach based on extrapolating from earlier progress
(18:14) Takeaways
The original text contained 10 footnotes which were omitted from this narration.
---
First published:
June 24th, 2025
Source:
https://www.lesswrong.com/posts/hpjj4JgRw9akLMRu5/what-does-10x-ing-effective-compute-get-you
---
Narrated by TYPE III AUDIO.