This week features an insightful discussion with Claudia Perlich about situations in machine learning where models — sometimes built by well-intentioned practitioners — can appear to be highly predictive despite being trained on random data. Our discussion covers some novel observations about ROC and AUC, as well as an informative discussion of leakage.
Much of our discussion is inspired by two excellent papers Claudia authored: "Leakage in Data Mining: Formulation, Detection, and Avoidance" and "On Cross Validation and Stacking: Building Seemingly Predictive Models on Random Data". Both are highly recommended reading!
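A minimal sketch of the kind of failure discussed in the episode (my own illustrative example, not code from either paper): features and labels are generated independently, so true predictive power is nil, yet selecting features on the *full* dataset before cross-validation — a classic form of leakage — produces a cross-validated AUC well above 0.5. Selecting features inside each training fold removes the leak. All function names here are hypothetical.

```python
import numpy as np

def auc(y_true, scores):
    # Rank-based (Mann-Whitney) AUC; assumes no tied scores.
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = int(y_true.sum())
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def correlations(X, y):
    # Pearson correlation of each column of X with y.
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    return (Xc * yc[:, None]).sum(axis=0) / (
        np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)

def cv_auc(X, y, rng, k=5, leak=False, n_keep=20):
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    if leak:
        # THE LEAK: features chosen using all data, including future test folds.
        keep = np.argsort(-np.abs(correlations(X, y)))[:n_keep]
    y_all, s_all = [], []
    for f in folds:
        tr = np.setdiff1d(idx, f)
        if not leak:
            # Honest: selection confined to the training fold.
            keep = np.argsort(-np.abs(correlations(X[tr], y[tr])))[:n_keep]
        w = correlations(X[tr][:, keep], y[tr])  # simple linear scorer
        s_all.append(X[f][:, keep] @ w)
        y_all.append(y[f])
    return auc(np.concatenate(y_all), np.concatenate(s_all))

rng = np.random.default_rng(0)
n, p = 50, 5000
X = rng.standard_normal((n, p))
y = rng.permutation(np.repeat([0, 1], n // 2))  # labels independent of X

print(f"leaky CV AUC:  {cv_auc(X, y, rng, leak=True):.2f}")   # typically well above 0.5
print(f"honest CV AUC: {cv_auc(X, y, rng, leak=False):.2f}")  # typically near 0.5
```

With thousands of candidate features and only fifty samples, a handful of columns will correlate with the random labels by chance alone; full-data selection bakes that chance correlation into every test fold, which is exactly how a seemingly predictive model gets built on noise.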