“45 - Samuel Albanie on DeepMind’s AGI Safety Approach” by DanielFilan

77 min • July 7th, 2025

YouTube link

In this episode, I chat with Samuel Albanie about the Google DeepMind paper he co-authored called “An Approach to Technical AGI Safety and Security”. It covers the assumptions made by the approach, as well as the types of mitigations it outlines.

Daniel Filan (00:00:09): Hello, everybody. In this episode, I’ll be speaking with Samuel Albanie, a research scientist at Google DeepMind, who was previously an assistant professor working on computer vision. The description of this episode has links and timestamps for your enjoyment, and a [...]

---

Outline:

(01:42) DeepMind's Approach to Technical AGI Safety and Security

(05:51) Current paradigm continuation

(20:30) No human ceiling

(22:38) Uncertain timelines

(24:47) Approximate continuity and the potential for accelerating capability improvement

(34:48) Misuse and misalignment

(39:26) Societal readiness

(43:25) Misuse mitigations

(52:00) Misalignment mitigations

(01:05:16) Samuel's thinking about technical AGI safety

(01:14:31) Following Samuel's work

---

First published:
July 6th, 2025

Source:
https://www.lesswrong.com/posts/nZtAkGmDELMnLJMQ5/45-samuel-albanie-on-deepmind-s-agi-safety-approach

---

Narrated by TYPE III AUDIO.
