How to proxy the unmodellable: Analysing granular insurance claims in the presence of unobservable or complex drivers

Benjamin Avanzi, Greg Taylor, Bernard Wong, Alan Xian*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-reviewed

Abstract

The estimation of claim and premium liabilities is a key component of an actuary's role and plays a vital part in any insurance company's operations. In practice, such calculations are complicated by the stochastic nature of the claims process as well as the impracticality of capturing all relevant and material drivers of the observed claims data. In the past, computational limitations have promoted the prevalence of simplified (but possibly sub-optimal) aggregate methodologies. However, in light of modern advances in processing power, it is viable to increase the granularity at which we analyse insurance data sets so that potentially useful information is not discarded. By utilising more granular and detailed data (which is usually readily available to insurers), model predictions may become more accurate and precise.

Unfortunately, detailed analysis of large insurance data sets in this manner poses some unique challenges. Firstly, there is no standard framework to which practitioners can refer, and it can be challenging to tractably integrate all modelled components into one comprehensive model. Secondly, analysis at greater granularity or level of detail requires more intense levels of scrutiny, as complex trends and drivers that were previously masked by aggregation and discretisation assumptions may emerge. This is a particular issue with claim drivers that are either unobservable to the modeller or very difficult or expensive to model. Finally, computation times are a material concern when processing such large volumes of data, as model outputs need to be obtained in reasonable time-frames.

Our proposed methodology overcomes the above problems by using a Markov-modulated non-homogeneous Poisson process framework. This extends the standard Poisson model by allowing for over-dispersion to be captured in an interpretable, structural manner. The approach implements a flexible exposure measure to explicitly allow for known/modelled claim drivers while the hidden component of the Hidden Markov model captures the impact of unobservable or practicably non-modellable information. Computational developments are made to drastically reduce calibration times. Theoretical findings are illustrated and validated in an empirical case study using Australian general insurance data in order to highlight the benefits of the proposed approach.
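The modelling idea described above can be illustrated with a minimal simulation sketch: a hidden Markov chain switches the claim intensity between regimes, while a known exposure measure scales the intensity within each period. All parameter values, state counts, and the seasonal exposure shape below are illustrative assumptions for exposition only, not calibrated values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical two-state hidden Markov chain modulating claim intensity.
# Parameters are illustrative assumptions, not taken from the paper.
P = np.array([[0.95, 0.05],   # per-period transition probabilities
              [0.10, 0.90]])
lam = np.array([2.0, 8.0])    # claim intensity per unit exposure in each hidden state

n_periods = 365
# A simple seasonal exposure measure standing in for known/modelled claim drivers.
exposure = 1.0 + 0.3 * np.sin(2 * np.pi * np.arange(n_periods) / 365)

# Simulate the hidden state path.
states = np.empty(n_periods, dtype=int)
states[0] = 0
for t in range(1, n_periods):
    states[t] = rng.choice(2, p=P[states[t - 1]])

# Observed claim counts: Poisson with state- and exposure-dependent mean.
# Marginally, the counts are over-dispersed relative to a single Poisson rate.
counts = rng.poisson(lam[states] * exposure)
```

The hidden states proxy the unobservable or non-modellable drivers, while the exposure term carries the explicitly modelled ones; calibration in practice would recover `P` and `lam` from the observed counts alone (e.g. via an EM-type algorithm), since the state path is unobserved.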
Original language: English
Title of host publication: General Insurance Seminar
Subtitle of host publication: Transform the Future
Publisher: Institute of Actuaries of Australia
Number of pages: 43
Publication status: Published - 12 Nov 2018
Externally published: Yes
