Mortality forecasting: How far back should we look in time?

Han Li, Colin O'Hare

Research output: Contribution to journal › Article › peer-review



Extrapolative methods are among the most commonly adopted forecasting approaches in the literature on projecting future mortality rates. It can be argued that there are two types of mortality models using this approach. The first extracts patterns in the age, time, and cohort dimensions in either a deterministic or a stochastic fashion. The second uses non-parametric smoothing techniques to model mortality and thus places no explicit constraints on the model. We argue that, from a forecasting point of view, the main difference between the two types of models is whether they treat recent and historical information equally in the projection process. In this paper, we compare the forecasting performance of the two types of models using Great Britain male mortality data from 1950–2016. We also conduct a robustness test to see how sensitive the forecasts are to changes in the length of the historical data used to calibrate the models. The main conclusion of the study is that more recent information should be given more weight in the forecasting process, as it has greater predictive power than historical information.
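The paper's conclusion — that recent observations deserve more weight when extrapolating mortality trends — can be sketched with a simple weighted fit. The example below is a minimal illustration, not the authors' method: it uses synthetic log mortality rates (all data, the half-life parameter `h`, and the log-linear trend form are assumptions for illustration) and down-weights older years exponentially before fitting and extrapolating a trend.

```python
import numpy as np

# Synthetic, illustrative data only: a declining log mortality trend
# over 1950-2016 with Gaussian noise (NOT the paper's GB male data).
years = np.arange(1950, 2017)
rng = np.random.default_rng(0)
log_mx = -2.0 - 0.02 * (years - 1950) + rng.normal(0.0, 0.03, years.size)

# Exponentially decaying weights: a year's weight halves every `h` years
# into the past, so recent information dominates the calibration.
h = 15.0  # assumed half-life in years
weights = 0.5 ** ((years[-1] - years) / h)

# Weighted least-squares linear fit of log rates on calendar year;
# np.polyfit minimizes sum((w_i * residual_i)**2).
slope, intercept = np.polyfit(years, log_mx, 1, w=weights)

# One-step-ahead extrapolation of the fitted trend.
forecast_2017 = slope * 2017 + intercept
```

Setting `h` small makes the forecast track only the latest years; letting `h → ∞` recovers an equally weighted fit, which is the contrast the paper examines.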
Original language: English
Article number: 22
Pages (from-to): 1-15
Number of pages: 15
Issue number: 1
Publication status: Published - 22 Feb 2019

Bibliographical note

Version archived for private and non-commercial use with the permission of the author/s and according to publisher conditions. For further rights please contact the publisher.


  • mortality
  • forecasting
  • nonparametric
  • robustness


