Comment on: Improving Escalation of Care



To the Editor:
We read with great interest the article by Johnston et al1 describing the creation and validation of the Quality of Information Transfer (QUIT) tool. The authors systematically and thoroughly explain their thoughtful development of the QUIT tool and its associated validity evidence. We applaud the authors for creating an innovative tool that assesses the quality of information transfer in surgical communication. Upon review of the manuscript, we identified 4 areas of methodological concern that should be highlighted and considered when interpreting their results and conclusions.
First, in their study the authors utilize the “classical” framework of validity (which identifies 3 types of validity: content validity, criterion validity, and construct validity) to build evidence supporting the tool.2 Although this approach is not unreasonable, they incorrectly state that it is the “most current” validity framework. Authors such as Messick and Kane have described newer frameworks that are more comprehensive and practical.3–6 These frameworks treat the validity argument as a hypothesis-driven process that requires multiple sources of evidence to support or reject the interpretation of a given psychometric instrument's results. In fact, Samuel Messick's 5 sources of validity evidence—content, response process, internal structure, relations to other variables, and consequences—were adopted in 1999, and reaffirmed in 2014, as the current standard by the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education.7 Because of the authors’ reliance on the “classical” framework, the QUIT tool lacks sufficient support in the domains of “response process” and “consequences”—domains that are not represented in the older framework. Consequently, we would argue that the validity evidence for the QUIT tool is not as robust as the authors claim.
Second, “consequences” evidence evaluates the impact of a particular assessment on the lives of the learners or of others (teachers, patients) with whom they are associated.8 For example, if using an assessment does not ultimately lead to an expected change in behavior (eg, improved quality of information transfer during escalation of care), then it has failed in its overarching purpose, regardless of other aspects of the validity argument (content, reliability, relationships with other variables, etc). The authors state that an important next step is to study the effect of the tool on error rates and avoidable adverse events.1 We agree, and we note that in the more modern validity frameworks advocated by Messick and Kane,3,4 “consequences” evidence would be explicitly addressed as part of a comprehensive validity argument. In the case of the QUIT tool, additional consequences evidence could be derived from, for example, measuring the time supervising faculty spend assessing surgeons or nurses and remediating those with low QUIT scores, measuring the change in QUIT scores after remediation, and evaluating the impact of QUIT implementation on the institution's overall training curriculum.
Third, the authors compare the performance of junior and senior surgeons in an effort to demonstrate the construct validity of the QUIT tool. Such expert-novice comparisons are limited by confounding and do not actually prove that score differences reflect the target characteristic of interest.9 Moreover, because pooling 2 groups known to differ widens the spread of scores beyond what any single user population would show, these comparisons inadequately represent the true cohorts that will ultimately utilize the tool and likely overestimate the reliability coefficients.9
Finally, in Table 4, the authors list the Cronbach alpha coefficient for each of the 6 categories included in the tool as an indicator of internal consistency. We note that 2 of the categories have values >0.9 and 2 approach 0.9. Although a higher Cronbach alpha is generally desirable, it is well known that values >0.9 suggest redundancy in the assessment (ie, more items than necessary), which ultimately decreases the evidence for validity.
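As a point of reference for this concern, the standard formulation of Cronbach's alpha for a scale of $k$ items (notation ours, not drawn from the article under discussion) is

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),$$

where $\sigma^{2}_{Y_i}$ is the variance of item $i$ and $\sigma^{2}_{X}$ is the variance of the total score. Because alpha increases as items correlated with the rest of the scale are added, values above 0.9 often reflect overlapping items rather than a stronger instrument, which is why redundancy is the usual interpretation of such coefficients.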