Philipp, Michel, Rusch, Thomas, Hornik, Kurt, Strobl, Carolin. 2017. Measuring the Stability of Results from Supervised Statistical Learning.




Stability is a major requirement for drawing reliable conclusions when interpreting results from supervised statistical learning. In this paper, we present a general framework for assessing and comparing the stability of results that can be used in real-world statistical learning applications or in benchmark studies. We use the framework to show that stability is a property of both the algorithm and the data-generating process. In particular, we demonstrate that unstable algorithms (such as recursive partitioning) can produce stable results when the functional form of the relationship between the predictors and the response matches the algorithm. Typical uses of the framework in practice would be to compare the stability of results generated by different candidate algorithms for a data set at hand, or to assess the stability of algorithms in a benchmark study. Code to perform the stability analyses is provided in the form of an R package.
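As a rough illustration of the general idea (not the paper's exact procedure, and independent of the accompanying R package), stability of a learner's results can be quantified by refitting the learner on bootstrap resamples of the data and measuring the average pairwise agreement of the resulting predictions. The decision-stump learner, data, and function names below are invented for this sketch:

```python
import random

def fit_stump(data):
    """Fit a one-split decision stump: predict 1 if x > t, choosing the
    threshold t (from the observed x values) that minimizes training error."""
    best_t, best_err = None, float("inf")
    for t, _ in data:
        err = sum((x > t) != bool(y) for x, y in data)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def stability(data, eval_points, B=20, seed=0):
    """Average pairwise agreement of predictions from B bootstrap refits."""
    rng = random.Random(seed)
    preds = []
    for _ in range(B):
        boot = [rng.choice(data) for _ in data]   # bootstrap resample
        t = fit_stump(boot)
        preds.append([int(x > t) for x in eval_points])
    agree, pairs = 0.0, 0
    for i in range(B):
        for j in range(i + 1, B):
            agree += sum(a == b for a, b in zip(preds[i], preds[j])) / len(eval_points)
            pairs += 1
    return agree / pairs   # 1.0 = perfectly stable predictions

# Synthetic data whose step-shaped relationship matches the stump,
# so the refitted results should agree closely across resamples.
data = [(i / 20, int(i / 20 > 0.5)) for i in range(21)]
eval_points = [i / 100 for i in range(101)]
s = stability(data, eval_points)
```

When the functional form matches the learner, as here, the agreement `s` is close to 1; adding noise or a smooth relationship would lower it, mirroring the paper's point that stability depends on both algorithm and data-generating process.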


Publication's profile

Status of publication Published
Affiliation WU
Type of publication Working/discussion paper, preprint
Language English
Title Measuring the Stability of Results from Supervised Statistical Learning
Year 2017
URL http://epub.wu.ac.at/id/eprint/5398


Rusch, Thomas
Hornik, Kurt
Philipp, Michel (University of Zurich, Switzerland)
Strobl, Carolin (University of Zurich, Switzerland)
Institute for Statistics and Mathematics IN
Competence Center for Empirical Research Methods WE
Research areas (ÖSTAT Classification 'Statistik Austria')
1162 Statistics
5509 Psychological methodology
5701 Applied statistics
5704 Social statistics
5912 Social sciences (interdisciplinary)