Reply to Letter: “Improving Escalation of Care”
We would like to thank Dr Cook1 and colleagues for taking the time to respond to our recently published study detailing the validation of a tool to assess the quality of information transfer (QUIT) during the escalation of care process in surgery.2 They have carefully and thoughtfully identified both strengths and limitations of the research. In their letter, Cook et al. identify four areas of methodological concern which, they advise, should be taken into consideration.
The first of these is the use of a classical framework of validity rather than newer frameworks that they state are more current and practical. Although the use of newer validity frameworks has been endorsed by several important medical bodies, it cannot be argued that these newer approaches have, or should have, completely superseded the admittedly older, but still tried and tested, methods. Indeed, in Korndorffer et al.'s3 2010 paper on validation in the surgical education literature, 100 percent of the included studies used the classical method; it therefore appears to be the method currently used by surgical educators and researchers. Furthermore, the point Cook et al. make regarding the inclusion of consequences as a validity measure does not take into account the strong reliability data in the original study, which does involve consideration of the intended use and misuse of measure scores.4 Their second point also concerns consequences, and we would simply state that the consideration of faculty time, whilst important, did not fall within the scope of this study and would require further evaluation from both a scientific and an economic point of view.
Thirdly, Cook et al. raise concerns surrounding the use of novice-expert comparisons. They make a good point and, in an ideal world where research participants are readily and easily available (a situation somewhat removed from reality), we would agree. However, multiple studies using these types of comparison continue to be published by eminent researchers; these authors, like us, clearly feel the methodology has validity.5,6 In addition, the QUIT also demonstrated concurrent validity, something that Dr Cook has personally advocated.1
Finally, the high Cronbach alpha coefficients for four of the items in the QUIT are queried. This is a perfectly reasonable point regarding the redundancy of items in psychometric scales. However, the four items mentioned (communicator identities, patient identity, plan, and information presentation) are so critical to information transfer that we did not feel they could be omitted. It is always difficult to achieve full immersion in a simulated scenario, and ours involved only one patient. Imagine, for example, that an intern called their chief resident about a critically ill patient but did not provide the patient's identity; that is clearly an example of poor communication. The issue lies in the fact that even the most high-fidelity, immersive simulation (eg, HOSPEX)7 can never be as real as the real thing, a limitation we all have to live with.
In summary, we are grateful to Cook et al. for raising these interesting points of debate about this research and about the wider limitations of simulation validity research itself. We hope that the surgical research community continues to use this and other psychometric tools to further drive the quality of ward-based care in surgery, as we have attempted to do.