Optimising the design of intervention studies: critiques and ways forward

David Howard*, Wendy Best, Lyndsey Nickels

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    72 Citations (Scopus)


    Background: There is a growing body of research that evaluates interventions for neuropsychological impairments using single-case experimental designs, yet there is great diversity in the designs and analyses employed.

    Aims: This paper has two goals: first, to increase awareness and understanding of the limitations of therapy study designs and statistical techniques; second, to suggest designs and statistical techniques likely to produce intervention studies that can inform both theories of therapy and service provision.

    Main Contribution & Conclusions: We recommend a single-case experimental design that incorporates the following features. First, stimuli should be randomly allocated to treated and control conditions, matched for baseline performance, using relatively large stimulus sets to increase confidence in the data. Second, prior to intervention, baseline testing should occur on at least two occasions; simulations show that termination of the baseline phase should not be contingent on “stability”. Third, the intervention should run for a predetermined number of sessions rather than a performance-determined duration. Finally, treatment effects must be significantly better than expected by chance before one can be confident that the results reflect change greater than random variation. Appropriate statistical analysis is important: by-item statistical methods are strongly recommended, and a methodology is presented using WEighted STatistics (WEST).
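    The two methodological steps above — matched random allocation of stimuli and a by-item weighted-statistics test — can be sketched in Python. This is a minimal illustration, not the authors' implementation: the function names (`allocate_matched`, `west_t`), the pairing-by-rank allocation scheme, and the example weight vector are assumptions; the core WEST idea shown is that each item's scores across test occasions are combined with weights summing to zero, and the per-item weighted sums are tested against zero with a one-sample t-test.

    ```python
    import random
    from math import sqrt
    from statistics import mean, stdev

    def allocate_matched(items_with_baseline, seed=0):
        """Randomly allocate items to treated/control sets, matched on baseline score.

        Hypothetical helper: items are ranked by baseline performance, taken in
        adjacent pairs, and one member of each pair is randomly assigned to the
        treated set, the other to the control set.
        """
        rng = random.Random(seed)
        ranked = sorted(items_with_baseline, key=lambda it: it[1])
        treated, control = [], []
        for i in range(0, len(ranked) - 1, 2):
            pair = [ranked[i], ranked[i + 1]]
            rng.shuffle(pair)            # random assignment within the matched pair
            treated.append(pair[0][0])
            control.append(pair[1][0])
        return treated, control

    def west_t(scores_per_item, weights):
        """WEST-style by-item analysis (illustrative sketch).

        scores_per_item: one score sequence per item (one score per test occasion).
        weights: one weight per occasion, summing to zero, so the expected
        per-item weighted sum is zero under the null hypothesis of no change.
        Returns the one-sample t statistic and its degrees of freedom.
        """
        sums = [sum(w * s for w, s in zip(weights, scores))
                for scores in scores_per_item]
        n = len(sums)
        se = stdev(sums) / sqrt(n)       # standard error of the mean weighted sum
        return mean(sums) / se, n - 1
    ```

    For example, with four test occasions the zero-sum weight vector `[-3, -1, 1, 3]` tests for a linear improvement trend across occasions; the resulting t statistic would then be compared against the t distribution with `n - 1` degrees of freedom (e.g. via `scipy.stats`).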

    Original language: English
    Pages (from-to): 526-562
    Number of pages: 37
    Issue number: 5
    Publication status: Published - 4 May 2015


