YouTube link
In this episode, I chat with Samuel Albanie about the Google DeepMind paper he co-authored called “An Approach to Technical AGI Safety and Security”. It covers the assumptions made by the approach, as well as the types of mitigations it outlines.
Daniel Filan (00:00:09): Hello, everybody. In this episode, I’ll be speaking with Samuel Albanie, a research scientist at Google DeepMind, who was previously an assistant professor working on computer vision. The description of this episode has links and timestamps for your enjoyment, and a [...]
---
Outline:
(01:42) DeepMind's Approach to Technical AGI Safety and Security
(05:51) Current paradigm continuation
(20:30) No human ceiling
(22:38) Uncertain timelines
(24:47) Approximate continuity and the potential for accelerating capability improvement
(34:48) Misuse and misalignment
(39:26) Societal readiness
(43:25) Misuse mitigations
(52:00) Misalignment mitigations
(01:05:16) Samuel's thinking about technical AGI safety
(01:14:31) Following Samuel's work
---
First published:
July 6th, 2025
---
Narrated by TYPE III AUDIO.