On model selection curves

Samuel Müller*, Alan H. Welsh

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

20 Citations (Scopus)

Abstract

Many popular methods of model selection involve minimizing a penalized function of the data (such as the maximized log-likelihood or the residual sum of squares) over a set of models. The penalty in the criterion function is controlled by a penalty multiplier λ which determines the properties of the procedure. In this paper, we first review model selection criteria of the simple form "Loss + Penalty" and then propose studying such model selection criteria as functions of the penalty multiplier. This approach can be interpreted as exploring the stability of model selection criteria through what we call model selection curves. It leads to new insights into model selection and new proposals on how to select models. We use the bootstrap to enhance the basic model selection curve and develop convenient numerical and graphical summaries of the results. The methodology is illustrated on two data sets and supported by a small simulation. We show that the new methodology can outperform methods such as AIC and BIC, which correspond to single points on a model selection curve.
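The "Loss + Penalty" idea described in the abstract can be illustrated with a small sketch. The code below is a hypothetical toy example (not the authors' implementation): for nested linear regression submodels it minimizes the generic criterion n·log(RSS/n) + λ·p over a grid of penalty multipliers λ, tracing which model is selected as λ varies. Setting λ = 2 corresponds to AIC and λ = log(n) to BIC, the "single points on a model selection curve" mentioned above. The simulated data, subset structure, and grid of λ values are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: only the first two of five predictors matter (toy setup).
n, p_full = 100, 5
X = rng.normal(size=(n, p_full))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)

# Candidate models: nested subsets of predictors {0}, {0,1}, ..., {0,...,4}.
models = [tuple(range(k + 1)) for k in range(p_full)]

def rss(cols):
    """Residual sum of squares of the least-squares fit on the given columns."""
    Xm = X[:, list(cols)]
    beta, *_ = np.linalg.lstsq(Xm, y, rcond=None)
    return float(np.sum((y - Xm @ beta) ** 2))

def selected_model(lam):
    """Minimize the 'Loss + Penalty' criterion n*log(RSS/n) + lam * p.

    lam = 2 recovers AIC; lam = log(n) recovers BIC.
    """
    scores = [n * np.log(rss(m) / n) + lam * len(m) for m in models]
    return models[int(np.argmin(scores))]

# Trace the model selection curve over a grid of penalty multipliers.
for lam in [0.5, 2.0, np.log(n), 10.0]:
    print(f"lambda = {lam:5.2f} -> selected predictors {selected_model(lam)}")
```

Plotting the selected model (or its criterion value) against λ gives a basic model selection curve; larger λ penalizes model size more heavily, so the selected model can only shrink as λ grows.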

Original language: English
Pages (from-to): 240-256
Number of pages: 17
Journal: International Statistical Review
Volume: 78
Issue number: 2
DOIs
Publication status: Published - Aug 2010
Externally published: Yes

Keywords

  • Akaike Information Criterion (AIC)
  • Bayesian Information Criterion (BIC)
  • Generalized Information Criterion (GIC)
  • linear regression
  • model selection
  • model selection curves
