Standards, Accuracy, and Questions of Bias in Rorschach Meta-Analyses: Reply to Wood, Garb, Nezworski, Lilienfeld, and Duke (2015)

Abstract

Wood, Garb, Nezworski, Lilienfeld, and Duke (2015) found our systematic review and meta-analyses of 65 Rorschach variables to be accurate and unbiased, and hence removed their previous recommendation for a moratorium on the applied use of the Rorschach. However, Wood et al. (2015) hypothesized that publication bias would exist for 4 Rorschach variables. To test this hypothesis, they replicated our meta-analyses for these 4 variables and added unpublished dissertations to the pool of articles. In the process, they used procedures that contradicted their standards and recommendations for sound Rorschach research, which consistently led to significantly lower effect sizes. In reviewing their meta-analyses, we found numerous methodological errors, data errors, and omitted studies. In contrast to their strict requirements for interrater reliability in the Rorschach meta-analyses of other researchers, they did not report interrater reliability for any of their coding and classification decisions. In addition, many of their conclusions were based on a narrative review of individual studies and post hoc analyses rather than their meta-analytic findings. Finally, we challenge their sole use of dissertations to test publication bias because (a) they failed to reconcile their conclusion that publication bias was present with the analyses we conducted showing its absence, and (b) we found numerous problems with dissertation study quality. In short, one cannot rely on the findings or the conclusions reported in Wood et al.
