Methods: Participating modeling groups were asked to reproduce the results of two published studies using the input data described in those articles. Gaps in the input data were filled with assumptions reported by the modeling groups. Goodness of fit between the results reported in the target studies and the groups' replicated outputs was evaluated using the slope of the linear regression line and the coefficient of determination (R²). After a general discussion of the results, a diabetes-specific checklist for the transparency of model inputs was developed.
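The goodness-of-fit criterion described above can be sketched in a few lines: regress the replicated outputs on the reported results and inspect the slope and R². The function and the sample values below are illustrative, not data from the target studies; a slope and R² near 1 would indicate a close replication.

```python
def goodness_of_fit(reported, replicated):
    """Return (slope, r_squared) of the least-squares line: replicated ~ reported."""
    n = len(reported)
    mean_x = sum(reported) / n
    mean_y = sum(replicated) / n
    # Least-squares slope and intercept.
    sxx = sum((x - mean_x) ** 2 for x in reported)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(reported, replicated))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    # Coefficient of determination: 1 - residual SS / total SS.
    ss_res = sum((y - (intercept + slope * x)) ** 2
                 for x, y in zip(reported, replicated))
    ss_tot = sum((y - mean_y) ** 2 for y in replicated)
    r_squared = 1 - ss_res / ss_tot
    return slope, r_squared

# Hypothetical reported vs. replicated outcome values for one target study.
reported = [10.0, 12.5, 15.0, 20.0, 25.0]
replicated = [10.2, 12.1, 15.4, 19.5, 25.3]
slope, r2 = goodness_of_fit(reported, replicated)
```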
Results: Seven groups participated in the transparency challenge. In both studies, the reporting of key model input parameters, including the baseline characteristics of simulated patients, treatment effect and treatment intensification threshold assumptions, the evolution of treatment effects over time, the prediction of complications, and cost data, was insufficiently transparent, and such parameters were often missing altogether. Not surprisingly, goodness of fit was better for the study that reported its input data more transparently. To improve transparency in diabetes modeling, the Diabetes Modeling Input Checklist was developed, listing the minimal input data required for reproducibility in most diabetes modeling applications.
Conclusions: Transparency of diabetes model inputs is essential to the reproducibility and credibility of simulation results. In the Eighth Mount Hood Challenge, the Diabetes Modeling Input Checklist was developed with the goal of improving the transparency of input data reporting and the reproducibility of diabetes simulation model results.
- computer modeling
- Mount Hood Challenge