Many popular methods of model selection involve minimizing a penalized function of the data (such as the maximized log-likelihood or the residual sum of squares) over a set of models. The penalty in the criterion function is controlled by a penalty multiplier λ, which determines the properties of the procedure. In this paper, we first review model selection criteria of the simple form "Loss + Penalty" and then propose studying such criteria as functions of the penalty multiplier. This approach can be interpreted as exploring the stability of model selection criteria through what we call model selection curves. It leads to new insights into model selection and new proposals on how to select models. We use the bootstrap to enhance the basic model selection curve and develop convenient numerical and graphical summaries of the results. The methodology is illustrated on two data sets and supported by a small simulation. We show that the new methodology can outperform methods such as AIC and BIC, which correspond to single points on a model selection curve.
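As a concrete illustration of the "Loss + Penalty" idea, the sketch below traces which candidate model minimizes the criterion n·log(RSS/n) + λ·(model size) as λ varies, for best-subset selection in a small linear regression. This is a minimal assumed setup, not the paper's own implementation: the synthetic data, the exhaustive subset search, and the specific λ grid are all illustrative choices. Note that λ = 2 corresponds to an AIC-type penalty and λ = log(n) to a BIC-type penalty, so those criteria are single points on the curve.

```python
# Illustrative sketch (not the paper's code): a model selection curve
# for best-subset linear regression, using the criterion
#     n * log(RSS / n) + lambda * (number of predictors in the model).
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 4
X = rng.normal(size=(n, p))
beta = np.array([2.0, -1.0, 0.0, 0.0])   # only the first two predictors matter
y = X @ beta + rng.normal(size=n)

def rss(subset):
    """Residual sum of squares of the least-squares fit on the given columns."""
    if not subset:
        return float(y @ y)
    Xs = X[:, list(subset)]
    resid = y - Xs @ np.linalg.lstsq(Xs, y, rcond=None)[0]
    return float(resid @ resid)

# All 2^p candidate models (subsets of predictor indices).
candidates = [s for r in range(p + 1) for s in itertools.combinations(range(p), r)]

def selected_model(lam):
    """Model minimizing n*log(RSS/n) + lam*|model| for this penalty multiplier."""
    return min(candidates, key=lambda s: n * np.log(rss(s) / n) + lam * len(s))

# The model selection curve: the chosen model as a step function of lambda.
for lam in [0.5, 2.0, np.log(n), 10.0]:   # lam = 2 ~ AIC, lam = log(n) ~ BIC
    print(f"lambda = {lam:5.2f}  selected predictors: {selected_model(lam)}")
```

Plotting the selected model (or its size) against λ gives the basic model selection curve; the paper's bootstrap enhancement would repeat this computation over resampled data sets to assess the stability of each λ region.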
- Akaike Information Criterion (AIC)
- Bayesian Information Criterion (BIC)
- Generalized Information Criterion (GIC)
- linear regression
- model selection
- model selection curves