A review of measurement practice in studies of clinical decision support systems 1998-2017

Philip J. Scott, Angela W. Brown, Taiwo Adedeji, Jeremy C. Wyatt, Andrew Georgiou, Eric L. Eisenstein, Charles P. Friedman

Research output: Contribution to journal › Article › Research › peer-review

Abstract

Objective: To assess measurement practice in clinical decision support evaluation studies. Materials and methods: We identified empirical studies evaluating clinical decision support systems published from 1998 to 2017. We reviewed titles, abstracts, and full paper contents for evidence of attention to measurement validity, reliability, or reuse. We used Friedman and Wyatt's typology to categorize the studies. Results: There were 391 studies that met the inclusion criteria. Study types in this cohort were primarily field user effect studies (n = 210) or problem impact studies (n = 150). Of those, 280 studies (72%) had no evidence of attention to measurement methodology, and 111 (28%) had some evidence with 33 (8%) offering validity evidence; 45 (12%) offering reliability evidence; and 61 (16%) reporting measurement artefact reuse. Discussion: Only 5 studies offered validity assessment within the study. Valid measures were predominantly observed in problem impact studies with the majority of measures being clinical or patient reported outcomes with validity measured elsewhere. Conclusion: Measurement methodology is frequently ignored in empirical studies of clinical decision support systems and particularly so in field user effect studies. Authors may in fact be attending to measurement considerations and not reporting this or employing methods of unknown validity and reliability in their studies. In the latter case, reported study results may be biased and effect sizes misleading. We argue that replication studies to strengthen the evidence base require greater attention to measurement practice in health informatics research.

Language: English
Pages: 1120-1128
Number of pages: 9
Journal: Journal of the American Medical Informatics Association : JAMIA
Volume: 26
Issue number: 10
DOI: 10.1093/jamia/ocz035
Publication status: Published - 1 Oct 2019

Fingerprint

  • Clinical Decision Support Systems
  • Reproducibility of Results
  • Informatics
  • Artifacts
  • Cohort Studies
  • Health
  • Research

Bibliographical note

Copyright the Author(s) 2019. Version archived for private and non-commercial use with the permission of the author/s and according to publisher conditions. For further rights please contact the publisher.

Keywords

  • clinical decision support systems
  • health informatics
  • measurement
  • reliability
  • validity

Cite this

Scott, Philip J.; Brown, Angela W.; Adedeji, Taiwo; Wyatt, Jeremy C.; Georgiou, Andrew; Eisenstein, Eric L.; Friedman, Charles P. / A review of measurement practice in studies of clinical decision support systems 1998-2017. In: Journal of the American Medical Informatics Association : JAMIA. 2019 ; Vol. 26, No. 10. pp. 1120-1128.
@article{ffd7fd1664d5414888eb1ece28d2ddc9,
title = "A review of measurement practice in studies of clinical decision support systems 1998-2017",
abstract = "Objective: To assess measurement practice in clinical decision support evaluation studies. Materials and methods: We identified empirical studies evaluating clinical decision support systems published from 1998 to 2017. We reviewed titles, abstracts, and full paper contents for evidence of attention to measurement validity, reliability, or reuse. We used Friedman and Wyatt's typology to categorize the studies. Results: There were 391 studies that met the inclusion criteria. Study types in this cohort were primarily field user effect studies (n = 210) or problem impact studies (n = 150). Of those, 280 studies (72{\%}) had no evidence of attention to measurement methodology, and 111 (28{\%}) had some evidence with 33 (8{\%}) offering validity evidence; 45 (12{\%}) offering reliability evidence; and 61 (16{\%}) reporting measurement artefact reuse. Discussion: Only 5 studies offered validity assessment within the study. Valid measures were predominantly observed in problem impact studies with the majority of measures being clinical or patient reported outcomes with validity measured elsewhere. Conclusion: Measurement methodology is frequently ignored in empirical studies of clinical decision support systems and particularly so in field user effect studies. Authors may in fact be attending to measurement considerations and not reporting this or employing methods of unknown validity and reliability in their studies. In the latter case, reported study results may be biased and effect sizes misleading. We argue that replication studies to strengthen the evidence base require greater attention to measurement practice in health informatics research.",
keywords = "clinical decision support systems, health informatics, measurement, reliability, validity",
author = "Scott, {Philip J.} and Brown, {Angela W.} and Taiwo Adedeji and Wyatt, {Jeremy C.} and Andrew Georgiou and Eisenstein, {Eric L.} and Friedman, {Charles P.}",
note = "Copyright the Author(s) 2019. Version archived for private and non-commercial use with the permission of the author/s and according to publisher conditions. For further rights please contact the publisher.",
year = "2019",
month = "10",
day = "1",
doi = "10.1093/jamia/ocz035",
language = "English",
volume = "26",
pages = "1120--1128",
journal = "Journal of the American Medical Informatics Association : JAMIA",
issn = "1067-5027",
publisher = "Oxford University Press",
number = "10",

}

A review of measurement practice in studies of clinical decision support systems 1998-2017. / Scott, Philip J.; Brown, Angela W.; Adedeji, Taiwo; Wyatt, Jeremy C.; Georgiou, Andrew; Eisenstein, Eric L.; Friedman, Charles P.

In: Journal of the American Medical Informatics Association : JAMIA, Vol. 26, No. 10, 01.10.2019, p. 1120-1128.

Research output: Contribution to journal › Article › Research › peer-review

TY - JOUR

T1 - A review of measurement practice in studies of clinical decision support systems 1998-2017

AU - Scott, Philip J.

AU - Brown, Angela W.

AU - Adedeji, Taiwo

AU - Wyatt, Jeremy C.

AU - Georgiou, Andrew

AU - Eisenstein, Eric L.

AU - Friedman, Charles P.

N1 - Copyright the Author(s) 2019. Version archived for private and non-commercial use with the permission of the author/s and according to publisher conditions. For further rights please contact the publisher.

PY - 2019/10/1

Y1 - 2019/10/1

N2 - Objective: To assess measurement practice in clinical decision support evaluation studies. Materials and methods: We identified empirical studies evaluating clinical decision support systems published from 1998 to 2017. We reviewed titles, abstracts, and full paper contents for evidence of attention to measurement validity, reliability, or reuse. We used Friedman and Wyatt's typology to categorize the studies. Results: There were 391 studies that met the inclusion criteria. Study types in this cohort were primarily field user effect studies (n = 210) or problem impact studies (n = 150). Of those, 280 studies (72%) had no evidence of attention to measurement methodology, and 111 (28%) had some evidence with 33 (8%) offering validity evidence; 45 (12%) offering reliability evidence; and 61 (16%) reporting measurement artefact reuse. Discussion: Only 5 studies offered validity assessment within the study. Valid measures were predominantly observed in problem impact studies with the majority of measures being clinical or patient reported outcomes with validity measured elsewhere. Conclusion: Measurement methodology is frequently ignored in empirical studies of clinical decision support systems and particularly so in field user effect studies. Authors may in fact be attending to measurement considerations and not reporting this or employing methods of unknown validity and reliability in their studies. In the latter case, reported study results may be biased and effect sizes misleading. We argue that replication studies to strengthen the evidence base require greater attention to measurement practice in health informatics research.

AB - Objective: To assess measurement practice in clinical decision support evaluation studies. Materials and methods: We identified empirical studies evaluating clinical decision support systems published from 1998 to 2017. We reviewed titles, abstracts, and full paper contents for evidence of attention to measurement validity, reliability, or reuse. We used Friedman and Wyatt's typology to categorize the studies. Results: There were 391 studies that met the inclusion criteria. Study types in this cohort were primarily field user effect studies (n = 210) or problem impact studies (n = 150). Of those, 280 studies (72%) had no evidence of attention to measurement methodology, and 111 (28%) had some evidence with 33 (8%) offering validity evidence; 45 (12%) offering reliability evidence; and 61 (16%) reporting measurement artefact reuse. Discussion: Only 5 studies offered validity assessment within the study. Valid measures were predominantly observed in problem impact studies with the majority of measures being clinical or patient reported outcomes with validity measured elsewhere. Conclusion: Measurement methodology is frequently ignored in empirical studies of clinical decision support systems and particularly so in field user effect studies. Authors may in fact be attending to measurement considerations and not reporting this or employing methods of unknown validity and reliability in their studies. In the latter case, reported study results may be biased and effect sizes misleading. We argue that replication studies to strengthen the evidence base require greater attention to measurement practice in health informatics research.

KW - clinical decision support systems

KW - health informatics

KW - measurement

KW - reliability

KW - validity

UR - http://www.scopus.com/inward/record.url?scp=85068464298&partnerID=8YFLogxK

U2 - 10.1093/jamia/ocz035

DO - 10.1093/jamia/ocz035

M3 - Article

VL - 26

SP - 1120

EP - 1128

JO - Journal of the American Medical Informatics Association : JAMIA

T2 - Journal of the American Medical Informatics Association : JAMIA

JF - Journal of the American Medical Informatics Association : JAMIA

SN - 1067-5027

IS - 10

ER -