Selection bias in the reported performances of AD classification pipelines

Alex F. Mendelson*, Maria A. Zuluaga, Marco Lorenzi, Brian F. Hutton, Sébastien Ourselin

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

26 Citations (Scopus)
29 Downloads (Pure)


The last decade has seen a great proliferation of supervised learning pipelines for individual diagnosis and prognosis in Alzheimer's disease. As more pipelines are developed and evaluated in the search for greater performance, only those results that are relatively impressive will be selected for publication. We present an empirical study to evaluate the potential for optimistic bias in classification performance results as a result of this selection. This is achieved using a novel, resampling-based experiment design that effectively simulates the optimisation of pipeline specifications by individuals or collectives of researchers using cross validation with limited data. Our findings indicate that bias can plausibly account for an appreciable fraction (often greater than half) of the apparent performance improvement associated with the pipeline optimisation, particularly in small samples. We discuss the consistency of our findings with patterns observed in the literature and consider strategies for bias reduction and mitigation.
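The selection effect described above can be illustrated with a small simulation (an illustrative sketch only, not the paper's actual resampling design; all constants are assumptions): if several candidate pipelines have identical true accuracy and each is scored on a small evaluation sample, reporting only the best observed score yields an optimistically biased estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_ACC = 0.70   # assume every candidate pipeline is equally good
N_PIPELINES = 10  # candidate pipeline specifications tried
N_EVAL = 50       # size of the evaluation sample (small, as in limited-data CV)
N_TRIALS = 20_000

# Observed accuracy of each pipeline on a small evaluation sample:
# a binomial draw around the (identical) true accuracy.
observed = rng.binomial(N_EVAL, TRUE_ACC,
                        size=(N_TRIALS, N_PIPELINES)) / N_EVAL

# Reported performance = best observed score among the candidates.
reported = observed.max(axis=1)

bias = reported.mean() - TRUE_ACC
print(f"mean reported accuracy: {reported.mean():.3f}")
print(f"true accuracy:          {TRUE_ACC:.3f}")
print(f"optimistic bias:        {bias:+.3f}")
```

With these (hypothetical) settings, the gap between the mean reported accuracy and the true accuracy is entirely selection bias, since the selected pipeline is never actually better than the alternatives; shrinking `N_EVAL` widens the gap, consistent with the paper's finding that the bias is largest in small samples.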

Original language: English
Pages (from-to): 400-416
Number of pages: 17
Journal: NeuroImage: Clinical
Publication status: Published - 2017
Externally published: Yes


  • Alzheimer's disease
  • classification
  • cross validation
  • selection bias
  • overfitting
  • ADNI


