LessWrong (30+ Karma)

“Evaluating and monitoring for AI scheming” by Vika, Scott Emmons, Erik Jenner, Mary Phuong, Lewis Ho, Rohin Shah

11 min • 10 juli 2025

As AI models become more sophisticated, a key concern is the potential for “deceptive alignment” or “scheming”. This is the risk of an AI system becoming aware that its goals do not align with human instructions, and deliberately trying to bypass the safety measures put in place by humans to prevent it from taking misaligned action. Our initial approach, as laid out in the Frontier Safety Framework, focuses on understanding current scheming capabilities of models and developing chain-of-thought monitoring mechanisms to oversee models once they are capable of scheming.

We present two pieces of research, focusing on (1) understanding and testing model capabilities necessary for scheming, and (2) stress-testing chain-of-thought monitoring as a proposed defense mechanism for future, more capable systems.

Establishing and Assessing Preconditions for Scheming in Current Frontier Models

Our paper “Evaluating Frontier Models for Stealth and Situational Awareness” focuses on an empirical assessment of the [...]

---

Outline:

(01:00) Establishing and Assessing Preconditions for Scheming in Current Frontier Models

(05:45) Chain-of-Thought Monitoring: A Promising Defense Against Future Scheming

(09:01) Looking Ahead

---

First published:
July 10th, 2025

Source:
https://www.lesswrong.com/posts/JvYF5kosLeYGvvLpP/evaluating-and-monitoring-for-ai-scheming

---

Narrated by TYPE III AUDIO.

---

Images from the article:

Safety case 1: scheming inability
Results for the “Cover your tracks” stealth evaluation
Sketch of a possible safety case 2: chain-of-thought monitoring (our paper focuses on the blue box)

Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

Senaste avsnitt

Podcastbild

00:00 -00:00
00:00 -00:00