LessWrong (30+ Karma)

[Linkpost] “Incorrect Baseline Evaluations Call into Question Recent LLM-RL Claims” by shash42

3 min • 30 May 2025
This is a link post.

There has been a flurry of recent papers proposing new RL methods that claim to improve the "reasoning abilities" of language models. The most recent ones, which show improvements with random or even no external rewards, have led to surprise, excitement, and confusion.

We analyzed 7 popular LLM RL papers (100+ to 3000+ likes, 50k+ to 500k+ views on X), including "Spurious Rewards", "RL from 1 example", and 3 papers exploring "Intrinsic Confidence Rewards". We found that in most of these papers the reported improvements could be a mirage caused by various accidental issues in the evaluation setups (discussed below). In particular, the baseline numbers for the pre-RL models are massively underreported compared to the official numbers in the Qwen releases or other standardized evaluations (for example, in the Sober Reasoning paper). In several cases, the post-RL model's performance was actually worse than the (correctly evaluated) pre-RL baseline [...]
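The comparison at issue can be illustrated with a minimal sketch: evaluate the pre-RL checkpoint under the same standardized setup as the post-RL model before attributing any gain to RL. All names below (evaluate_accuracy, the setup dicts, the checkpoint strings) are hypothetical placeholders for illustration, not an API from the post or the papers discussed.

```python
from typing import Callable


def rl_gain(
    evaluate_accuracy: Callable[[str, dict], float],  # (checkpoint, eval setup) -> accuracy
    pre_rl_ckpt: str,
    post_rl_ckpt: str,
    paper_setup: dict,     # e.g. the paper's own prompts / answer extraction
    standard_setup: dict,  # e.g. a standardized evaluation configuration
) -> dict:
    """Compare the claimed RL gain against the gain measured over a
    correctly evaluated pre-RL baseline. If the baseline was underreported,
    the two numbers diverge, and the 'actual' gain can even be negative."""
    reported_baseline = evaluate_accuracy(pre_rl_ckpt, paper_setup)
    correct_baseline = evaluate_accuracy(pre_rl_ckpt, standard_setup)
    post_rl = evaluate_accuracy(post_rl_ckpt, standard_setup)
    return {
        "claimed_gain": post_rl - reported_baseline,
        "actual_gain": post_rl - correct_baseline,
    }
```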

---

First published:
May 29th, 2025

Source:
https://www.lesswrong.com/posts/p8rcMDRwEGeFAzCQS/incorrect-baseline-evaluations-call-into-question-recent-llm

Linkpost URL:
https://safe-lip-9a8.notion.site/Incorrect-Baseline-Evaluations-Call-into-Question-Recent-LLM-RL-Claims-2012f1fbf0ee8094ab8ded1953c15a37?pvs=4

---

Narrated by TYPE III AUDIO.

---

Images from the article:

Bar graph comparing reported versus actual accuracy gains on MATH500 across different models.

