Clarifying Discrepancies in Responsiveness Between Reliable Change Indices

Abstract

Objective

Several reliable change indices (RCIs) exist to evaluate statistically significant individual change with repeated neuropsychological assessment, yet there is little guidance on model selection and its subsequent implications. Using existing test–retest norms, key parameters were systematically evaluated for their influence on different RCI models.

Method

Normative test–retest data for selected Wechsler Memory Scale-IV subtests were chosen based on the direction and magnitude of differential practice (inequality of test and retest variance). The individual's relative position with respect to the normative mean was systematically manipulated to evaluate predictable differences in responsiveness across three RCI models.

Results

With respect to negative change, RCI McSweeny was most responsive when individual baseline scores were below the normative mean, irrespective of differential practice. When an individual score was greater than the normative mean, RCI Chelune was most responsive with lower retest variance, and RCI Maassen was most responsive with greater retest variance. This pattern of results can change when test–retest reliability is excellent and retest variability is greater. The order of responsiveness is reversed if positive change is of interest.

Conclusion

RCI models tend to agree when the individual approximates the normative mean at baseline and test–retest variability is equal. However, no RCI model will be universally more or less responsive across all conditions, and model selection may influence subsequent interpretation of change. Given the systematic and predictable differences between models, a more rational choice can now be made. While a consensus on RCI model preference does not exist, we prefer the regression-based model for several reasons outlined.
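The three RCI models compared in the abstract can be sketched as follows. This is an illustrative implementation of the formulations as they are commonly given in the reliable-change literature (Chelune's practice-adjusted RCI, Maassen's pooled-variance RCI, and the McSweeny regression-based / SRB approach); the function names, parameterization, and example values are assumptions for illustration, not the authors' code or data.

```python
import math

def rci_chelune(x1, x2, m1, m2, sd1, r12):
    """Practice-adjusted RCI (Chelune): the mean practice effect (m2 - m1)
    is subtracted from the observed change; the standard error of the
    difference uses the baseline SD only."""
    se_diff = math.sqrt(2) * sd1 * math.sqrt(1 - r12)
    return ((x2 - x1) - (m2 - m1)) / se_diff

def rci_maassen(x1, x2, m1, m2, sd1, sd2, r12):
    """Maassen RCI: pools test and retest variances in the denominator,
    accommodating differential practice (unequal variances)."""
    se_diff = math.sqrt((sd1**2 + sd2**2) * (1 - r12))
    return ((x2 - x1) - (m2 - m1)) / se_diff

def rci_mcsweeny(x1, x2, m1, m2, sd1, sd2, r12):
    """Regression-based (SRB) RCI (McSweeny): retest score is predicted
    from baseline via linear regression; change is the standardized
    residual around that prediction."""
    b = r12 * sd2 / sd1              # regression slope
    a = m2 - b * m1                  # intercept
    predicted = a + b * x1           # expected retest score
    se_est = sd2 * math.sqrt(1 - r12**2)
    return (x2 - predicted) / se_est
```

Here `x1`/`x2` are an individual's test and retest scores, `m1`/`m2` and `sd1`/`sd2` the normative test–retest means and SDs, and `r12` the test–retest reliability. When the individual sits at the normative mean at baseline and variances are equal, the three models return essentially the same value, consistent with the agreement noted in the conclusion; they diverge as the baseline score moves away from the mean or as retest variance departs from test variance.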