LessWrong (30+ Karma)

“How Does Time Horizon Vary Across Domains?” by Thomas Kwa

37 min • July 14, 2025

Summary

In the paper Measuring AI Ability to Complete Long Software Tasks (Kwa & West et al. 2025), METR defined an AI model's 50% time horizon as the length of tasks (measured by how long they take human professionals) that it can complete autonomously with 50% probability. We estimated the time horizon of frontier models released since 2019 on a benchmark combining three sets of software and research tasks ranging from 1 second to 16 hours in length for humans (HCAST, RE-Bench, and SWAA, henceforth METR-HRS). We found that the time horizon has doubled every 7 months, possibly accelerating to every 4 months in 2024.

One important limitation was the task domain: all tasks involved software engineering or research, but AI capabilities are known to vary greatly between task types.[1] Here we explore whether similar trends apply to different task distributions, including self-driving and agentic computer use, using a methodology that [...]
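As a rough illustration of the metric, the 50% time horizon can be read off a logistic fit of per-task success against log task length. The sketch below is not METR's actual code; the task data, the use of scikit-learn, and the near-unregularized fit are assumptions made for illustration only.

```python
# Minimal sketch (not METR's code) of estimating a 50% time horizon:
# fit a logistic model of task success against log2(human completion
# time), then solve for the task length where P(success) = 0.5.
# The per-task data below is made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical tasks: human completion time in minutes, and whether the
# AI model completed each task autonomously (1 = success, 0 = failure).
task_minutes = np.array([0.1, 0.5, 2, 8, 30, 120, 480, 960])
solved       = np.array([1,   1,   1, 1, 1,  0,   1,   0])

X = np.log2(task_minutes).reshape(-1, 1)
clf = LogisticRegression(C=1e6).fit(X, solved)  # effectively unregularized

# P(success) = 0.5 where intercept + coef * log2(t) = 0, so:
horizon_minutes = 2.0 ** (-clf.intercept_[0] / clf.coef_[0, 0])
print(f"Estimated 50% time horizon: {horizon_minutes:.1f} minutes")
```

Given a horizon per model, the reported doubling time (roughly every 7 months) comes from how log horizon grows with model release date.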

---

Outline:

(00:10) Summary

(03:48) Methodology

(07:31) Benchmarks

(08:58) Results

(09:01) Trends on other domains

(09:58) Main takeaway

(10:27) More observations

(13:23) Soundness of the time horizon metric

(18:22) Video length is a poor predictor of difficulty

(21:16) SWE-Lancer task value is a poor predictor of difficulty

(22:25) Robustness checks

(24:23) Limitations and future work

(27:22) Conclusion

(28:50) Appendix

(28:53) Potential future experiments

(30:06) Details for individual benchmarks

(35:48) Other plots

(35:52) Raw data

---

First published:
July 14th, 2025

Source:
https://www.lesswrong.com/posts/6KcP7tEe5hgvHbrSF/how-does-time-horizon-vary-across-domains

---

Narrated by TYPE III AUDIO.

---

Images from the article:

Scatter plot showing model performance correlation with task length across different benchmarks.
Scatter plot showing task lengths across 10 different benchmarks, with yellow stars indicating best performance.
Bar graph showing reasoning effort across task price ranges: three levels of reasoning effort (Low, Medium, High) compared across four price ranges (<$500, $500-1K, $1K-2K, and $2K+), with the vertical axis showing percentages up to 100%.
Four graphs showing correlations between model success rate and task length, each plotting a different AI model's success rate against task duration in minutes.
Scatter plot showing correlation between model performance and task length across different benchmarks.
Graph showing AI task completion times across benchmarks (2019-2026), tracking METR-HRS, Tesla FSD, and other benchmarks as the time horizon at 50% success rate increases.
Four graphs analyzing GPT-4-mini success rates versus clue length on CG-Bench: the distribution of clue lengths, success rates across time bins, clue length by outcome, and a trend analysis with a slight negative slope.
Graph showing success probability versus task length, with time horizon indicator.

Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
