LessWrong (30+ Karma)

“Anthropic is Quietly Backpedalling on its Safety Commitments” by garrison

12 min • May 23, 2025

The company released a model it classified as risky — without meeting requirements it previously promised

This is the full text of a post first published on Obsolete, a Substack where I write about the intersection of capitalism, geopolitics, and artificial intelligence. I'm a freelance journalist and the author of a forthcoming book called Obsolete: Power, Profit, and the Race to Build Machine Superintelligence. Consider subscribing to stay up to date with my work.

After publication, this article was updated to include an additional response from Anthropic and to clarify that while the company's version history webpage doesn't explicitly highlight changes to the original ASL-4 commitment, discussion of these changes can be found in a redline PDF linked on that page.

Anthropic just released Claude 4 Opus, its most capable AI model to date. But in doing so, the company may have abandoned one of its earliest promises.

In [...]

---

Outline:

(00:11) The company released a model it classified as risky -- without meeting requirements it previously promised

(05:49) When voluntary governance breaks down

(08:07) What ASL-3 actually means

(09:46) A test of voluntary commitments

(10:50) What happens next

---

First published:
May 23rd, 2025

Source:
https://www.lesswrong.com/posts/HE2WXbftEebdBLR9u/anthropic-is-quietly-backpedalling-on-its-safety-commitments

---

Narrated by TYPE III AUDIO.

---

Images from the article:

Two graphs showing bioweapons acquisition trial results: raw scores and critical failures.

The graphs compare performance across four groups (Internet-only Control, Sonnet 3.7, Sonnet 4, and Opus 4), with accompanying text describing the test scores and uplift metrics between groups.

Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts or another podcast app.
