My guest, John Alioto, is an independent AI engineer with a computer science degree from UC Berkeley and 25 years of experience building real-world AI systems at companies like Microsoft and Google.
In the wake of an attack on Sam Altman’s property, John Alioto came on the show to argue that Yudkowsky’s words are violent rhetoric that helped create this moment. Since I completely disagree with that characterization, we had a lot of fuel for a passionate debate.
For the record, here’s my position on why AI doomers are NOT “calling for violence”:
Are we acting like we actually think there’s an urgent extinction risk? Yes.
Are we calling for lawless violence? Absolutely not. At least not me, or the leaders of the movement, or anyone I’ve ever personally interacted with.
Are we calling for violence as a last resort if a government policy has been established and then egregiously violated? Yes… but that’s just standard for any governance proposal! A proposal for a strictly enforced treaty isn’t a call for violence — it’s a call for doing everything we can to make sure no one breaks the treaty, with zero violence, unless rogue actors decide to break the treaty and bring the consequences on themselves.
Thanks to John for having an extremely respectful and good-faith debate on this heated subject.
Timestamps
00:00:00 — Cold Open
00:00:37 — Introducing John Alioto
00:03:02 — Setting the Stage: Recent Acts of Violence & AI Discourse
00:05:53 — Eliezer Yudkowsky's 2023 TIME Article
00:11:16 — John's Two-Part Argument
00:14:37 — Conditional on High P(Doom), Is Eliezer's Policy Bad?
00:17:46 — Be Like Carl Sagan — Win in the Arena of Ideas
00:21:12 — No Carve-Outs for Non-Signatories
00:26:15 — Hypothetical: What If the UN Voted for a Treaty?
00:30:46 — What's the Correct Interpretation of Eliezer's TIME Article?
00:32:42 — Liron's Interpretation: Same Structure as Any International Law Proposal
00:42:23 — What Should Eliezer Have Written? "Airstrikes" vs "Strong Deterrent"
00:49:57 — How John Would Rewrite the TIME Piece
00:50:54 — Carve-Outs: Allies, Civilians, Consequences
00:52:52 — Debate Wrap-Up
00:56:27 — Last Q: Does High P(Doom) Imply Violence?
00:59:40 — Closing Thoughts
Links
John P. Alioto on X (Twitter) — https://x.com/jpalioto
Eliezer Yudkowsky, “Pausing AI Developments Isn’t Enough. We Need to Shut It All Down” (Time Magazine, March 2023) — https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏
Get full access to Doom Debates at lironshapira.substack.com/subscribe
