I think a lot about the possibility of huge numbers of AI agents doing AI R&D inside an AI company (as depicted in AI 2027). I think particularly about what will happen if those AIs are scheming: coherently and carefully trying to grab power and take over the AI company, as a prelude to taking over the world. And even more particularly, I think about how we might try to mitigate the insider risk posed by these AIs, taking inspiration from traditional computer security, traditional insider threat prevention techniques, and first-principles thinking about the security opportunities afforded by the differences between AIs and humans.
So to flesh this out, I’m imagining a situation something like what AI 2027 forecasts for March 2027:
---
Outline:
(04:06) Differences between AI and human labor will incentivize massive infra changes
(06:49) Adopting new hardware will require modifying security-critical code
(07:28) Infra rewrites can get you big performance improvements
(08:13) The AI company builds new security-critical infra really fast for the sake of security
(08:59) Conclusion
The original text contained 4 footnotes which were omitted from this narration.
---
First published:
July 4th, 2025
---
Narrated by TYPE III AUDIO.