We explore how monitoring AI reasoning can reveal safety signals in critical decisions. Learn what monitorability means, why a perfect transcript isn't required, and how robust metrics and three evaluation modes (intervention, process, and outcome) help catch red flags. The episode covers why bigger models aren't necessarily less transparent, the surprising role of compute and RL, and practical ideas such as paying a monitorability tax and asking targeted follow-up questions.
Note: This podcast was AI-generated, and sometimes AI can make mistakes. Please double-check any critical information.
Sponsored by Embersilk LLC
