Doom Debates

AI Alignment Is Solved?! PhD Researcher Quintin Pope vs Liron Shapira (2023 Twitter Debate)

1 hr 20 min · 25 March 2026

Quintin Pope, PhD, is one of the few critics of AI doomerism who are truly fluent in the field's concepts and arguments. In October 2023, he joined me for a Twitter Spaces debate where he argued that AI alignment was basically already solved.

His “inside view” on machine learning forced me to update my position, but could he knock me off the doom train?

Timestamps

00:00:00 — Cold Open

00:00:43 — Introductions

00:01:22 — Quintin's Opening Statement

00:02:32 — Liron's Opening Statement

00:05:10 — Has RLHF Solved the Alignment Problem?

00:07:52 — AI Capabilities Are Constrained by Training Data

00:10:52 — Defining ASI and Could RLHF Align a Superintelligence?

00:13:13 — Quintin Is More Optimistic Than OpenAI

00:14:16 — What Is ASI in Your Mind?

00:15:57 — AI in 5 Years (2028) & AI Coding Agents

00:19:05 — Continuous or Discontinuous Capability Gains?

00:19:39 — DEBATE: General Intelligence Algorithm in Humans

00:30:02 — The Only Coherent Explanation of Humans Going to the Moon

00:34:01 — Are We "Fully Cooked" as a General Optimizer?

00:35:53 — Common Mistake in Forecasting Superintelligence

00:42:22 — 'Neat' vs 'Scruffy': Will Interpretable Structure Emerge Inside Neural Nets?

00:48:57 — Does This Disagreement Actually Matter for P(Doom)?

00:54:33 — Thought Experiment: Could You Have Predicted a Species Would Go to the Moon?

00:57:26 — The Basin of Attraction for Superintelligence

00:59:35 — Does a Superintelligence Even Exist in Algorithm Space?

01:09:59 — Closing Statements

01:12:40 — Audience Q&A

01:19:35 — Wrap Up

Links

Original Twitter Spaces debate (Quintin Pope vs. Liron Shapira) — https://x.com/i/spaces/1YpJkwOzOqEJj/peek

Quintin Pope on Twitter/X — https://twitter.com/QuintinPope5

Quintin Pope, Alignment Forum profile — https://www.alignmentforum.org/users/quintin-pope

InstructGPT, Wikipedia — https://en.wikipedia.org/wiki/InstructGPT

AIXI, Wikipedia — https://en.wikipedia.org/wiki/AIXI

AlphaZero, Wikipedia — https://en.wikipedia.org/wiki/AlphaZero

MuZero, Wikipedia — https://en.wikipedia.org/wiki/MuZero

DeepMind AlphaZero and MuZero page — https://deepmind.google/research/alphazero-and-muzero/

Midjourney — https://www.midjourney.com/

DALL-E, Wikipedia — https://en.wikipedia.org/wiki/DALL-E

OpenAI Superalignment announcement — https://openai.com/index/introducing-superalignment/

Shard Theory sequence on LessWrong — https://www.lesswrong.com/s/nyEFg3AuJpdAozmoX

“Evolution Provides No Evidence for the Sharp Left Turn” — https://www.lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn

“My Objections to ‘We’re All Gonna Die with Eliezer Yudkowsky’” — https://www.lesswrong.com/posts/wAczufCpMdaamF9fy/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky

“AI is Centralizing by Default; Let’s Not Make It Worse” — https://forum.effectivealtruism.org/posts/zd5inbT4kYKivincm/ai-is-centralizing-by-default-let-s-not-make-it-worse

Singular Learning Theory, Alignment Forum sequence — https://www.alignmentforum.org/s/mqwA5FcL6SrHEQzox

Doom Debates' mission is to raise mainstream awareness of imminent extinction from AGI and to build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏



Get full access to Doom Debates at lironshapira.substack.com/subscribe
