Abstract
We introduce the relatively new concept of subtractive lack-of-fit measures in the context of robust regression, in particular for generalised linear models. We devise a fast and robust feature selection framework for regression that empirically outperforms competing selection methods while remaining computationally feasible when fully exhaustive methods are not. Our method builds on the concepts of model stability, subtractive lack-of-fit measures and repeated model identification. We demonstrate how these components add value in a robust regression context, in particular through utilising a combination of robust regression coefficient and scale estimates. Through resampling, we construct a robust stability matrix, which contains multiple measures of feature importance for each variable. By constructing this stability matrix and using it to rank features by importance, we reduce the candidate model space and then perform an exhaustive search over the remaining models. We also introduce two visualisations to better convey the information held within the stability matrix: a subtractive Mosaic Probability Plot and a subtractive Variable Inclusion Plot. We demonstrate how these graphics allow a better understanding of how variable importance changes under small alterations to the underlying data. Our framework is made available in R through the RobStabR package.
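To make the resampling idea concrete, below is a minimal illustrative sketch in R, not the RobStabR API (whose functions are not shown here): it bootstraps the data, fits a robust GLM on each resample with `robustbase::glmrob()`, fills a crude stability matrix of importance flags, and ranks variables by selection frequency before restricting an exhaustive search to the top-ranked variables. The Wald-type importance rule and the cutoff are assumptions made purely for illustration.

```r
## Illustrative sketch only (assumed workflow, not the RobStabR API):
## bootstrap the data, fit a robust GLM per resample, record a crude
## importance flag per variable, then rank variables by frequency.
library(robustbase)  # provides glmrob(), a robust GLM estimator

set.seed(1)
n <- 200; p <- 8
X <- matrix(rnorm(n * p), n, p)
colnames(X) <- paste0("x", 1:p)
y <- rbinom(n, 1, plogis(1 + 2 * X[, 1] - 1.5 * X[, 2]))
dat <- data.frame(y = y, X)

B <- 100                                   # bootstrap resamples
stab <- matrix(0, B, p, dimnames = list(NULL, colnames(X)))

for (b in 1:B) {
  idx <- sample(n, replace = TRUE)
  fit <- glmrob(y ~ ., family = binomial, data = dat[idx, ])
  cf  <- summary(fit)$coefficients         # Estimate, Std. Error, ...
  z   <- abs(cf[-1, 1] / cf[-1, 2])        # robust Wald-type scores
  stab[b, ] <- as.numeric(z > 2)           # flag "important" variables
}

## Rank variables by bootstrap selection frequency ...
freq <- sort(colMeans(stab), decreasing = TRUE)

## ... then restrict the exhaustive search to subsets of the top-ranked
## variables (here the top 3), comparing fits by a robust criterion.
top <- names(freq)[1:3]
subsets <- unlist(lapply(seq_along(top),
                         function(k) combn(top, k, simplify = FALSE)),
                  recursive = FALSE)
```

In this toy version the stability matrix holds binary flags; the paper's stability matrix instead collects multiple subtractive lack-of-fit measures per variable, but the reduce-then-exhaustively-search structure is the same.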
| Original language | English |
| --- | --- |
| Pages (from-to) | 339-355 |
| Number of pages | 17 |
| Journal | Australian and New Zealand Journal of Statistics |
| Volume | 64 |
| Issue number | 3 |
| Early online date | 2 Sept 2022 |
| DOIs | |
| Publication status | Published - Sept 2022 |
Keywords
- bootstrap model selection
- exhaustive model search
- fence methods
- robust variable selection