Are fit indices used to test psychopathology structure biased? A simulation study

Ashley L. Greene*, Nicholas R. Eaton, Kaiqiao Li, Miriam K. Forbes, Robert F. Krueger, Kristian E. Markon, Irwin D. Waldman, David C. Cicero, Christopher C. Conway, Anna R. Docherty, Eiko I. Fried, Masha Y. Ivanova, Katherine G. Jonas, Robert D. Latzman, Christopher J. Patrick, Ulrich Reininghaus, Jennifer L. Tackett, Aidan G. C. Wright, Roman Kotov

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    97 Citations (Scopus)

    Abstract

    Structural models of psychopathology provide dimensional alternatives to traditional categorical classification systems. Competing models, such as the bifactor and correlated factors models, are typically compared via statistical indices to assess how well each model fits the same data. However, simulation studies have found evidence for probifactor fit index bias in several psychological research domains. The present study sought to extend this research to models of psychopathology, wherein the bifactor model has received much attention but its susceptibility to bias is not well characterized. We used Monte Carlo simulations to examine how various model misspecifications produced fit index bias for two commonly used estimators, WLSMV and MLR. We simulated binary indicators to represent psychiatric diagnoses and positively skewed continuous indicators to represent symptom counts. Across combinations of estimators, indicator distributions, and misspecifications, complex patterns of bias emerged, with fit indices more often than not failing to correctly identify the correlated factors model as the data-generating model. No fit index emerged as reliably unbiased across all misspecification scenarios. However, tests of model equivalence indicated that in one instance fit indices were not biased: they favored the bifactor model, albeit not unfairly. Overall, results suggest that comparisons of bifactor models to alternatives using fit indices may be misleading and call into question the evidentiary meaning of previous studies that identified the bifactor model as superior based on fit. We highlight the importance of comparing models based on substantive interpretability and their utility for addressing study aims, the methodological significance of model equivalence, and the need for statistical metrics that evaluate model quality.
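    The data-generation step the abstract describes can be illustrated with a minimal Monte Carlo sketch. The code below is a hypothetical, simplified example (not the authors' actual simulation design): it draws scores from a correlated two-factor model and dichotomizes the indicators to mimic binary diagnoses, the kind of data-generating model against which the bifactor model would then be compared. The loading of 0.7, factor correlation of 0.5, and four indicators per factor are assumed illustrative values, not parameters from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n_subjects = 1000
    n_per_factor = 4       # indicators per factor (illustrative)
    loading = 0.7          # common standardized loading (illustrative)
    factor_corr = 0.5      # correlation between the two factors (illustrative)
    threshold = 0.0        # cut point for dichotomizing into binary "diagnoses"

    # Draw correlated factor scores for two latent dimensions
    cov = np.array([[1.0, factor_corr],
                    [factor_corr, 1.0]])
    factors = rng.multivariate_normal([0.0, 0.0], cov, size=n_subjects)

    # Loading matrix: first block loads on factor 1, second on factor 2
    loadings = np.zeros((2 * n_per_factor, 2))
    loadings[:n_per_factor, 0] = loading
    loadings[n_per_factor:, 1] = loading

    # Continuous latent responses = factor part + unique error (unit total variance)
    unique_sd = np.sqrt(1.0 - loading**2)
    latent = factors @ loadings.T + rng.normal(0.0, unique_sd,
                                               size=(n_subjects, 2 * n_per_factor))

    # Dichotomize to mimic binary diagnostic indicators
    binary = (latent > threshold).astype(int)

    # Sanity check: within-factor correlations should exceed between-factor ones
    corr = np.corrcoef(binary, rowvar=False)
    within = corr[0, 1]                # two indicators of the same factor
    between = corr[0, n_per_factor]    # indicators of different factors
    print(f"within-factor r = {within:.2f}, between-factor r = {between:.2f}")
    ```

    In the study itself, data sets like this would be fit with both the correlated factors and bifactor models (e.g., via WLSMV for binary indicators) and their fit indices compared; the sketch stops at data generation, which is the part the abstract specifies.
    
    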

    Original language: English
    Pages (from-to): 740-764
    Number of pages: 25
    Journal: Journal of Abnormal Psychology
    Volume: 128
    Issue number: 7
    DOIs
    Publication status: Published - Oct 2019

    Keywords

    • bifactor model
    • factor analysis
    • fit index bias
    • model evaluation
    • Monte Carlo simulation
