Efficacy of Endobronchial Ultrasound-Transbronchial Needle Aspiration Virtual-Reality Simulator Training

To the Editor:
As researchers in endosonography, simulation-based training, and assessment of technical skills, we read with great interest the paper by Scarlata et al,1 who used a virtual-reality simulator to both train and assess performance of endobronchial ultrasound (EBUS). We congratulate the authors for exploring the efficacy of their training program, but we would also like to direct readers’ attention to methodological limitations that are important to consider when interpreting the study.
First, the assessment tool chosen by the authors [Endobronchial Ultrasound Skills and Tasks Assessment Tool (EBUS-STAT)] can only be used under direct observation, which makes blinding of the raters virtually impossible. Nonblinded raters introduce bias into the ratings and limit the evidence of validity to the lowest level according to the Oxford Centre for Evidence-based Medicine levels of evidence (level 4).2 Raters are very likely to be influenced by the appearance of the physicians, personal knowledge, or knowledge of previous experience or training; for example, the 3 raters in this study might have been more lenient after spending a full day training the doctors they assessed, thereby overestimating the efficacy of the training program.
Second, the EBUS-STAT is designed as a checklist, which only allows “black-or-white” assessments of the complicated technical skills that constitute an EBUS procedure. Global rating scales with multiple grading options are better suited to capture the finer nuances of performance.3 This could explain the finding by Scarlata and colleagues that they were unable to measure any effect of training in participants scoring more than 79 points in the pretest. All doctors with little or no experience in EBUS-transbronchial needle aspiration must be expected to benefit from training, and the inability to measure improvement could be due to a ceiling effect directly associated with the EBUS-STAT instrument. It is a known weakness of checklists that they only register whether a subpart of a procedure is performed, not how well it is performed.
Finally, in medical education research it is always more interesting to compare 2 different training programs than to measure the efficacy of a single training program. A recent randomized controlled study used the endobronchial ultrasonography assessment tool (EBUSAT) to compare simulation-based training with traditional apprenticeship training, measuring posttraining performance on real patient procedures.4 Bias was avoided by having multiple blinded raters assess procedural competency based on video recordings of the endoscopic and endosonographic views, and the assessment tool (based on global rating scales) was sensitive enough to detect a significant difference favoring simulator training. The EBUSAT is now being implemented for formative and summative assessment of performance on both simulators and patients in the European Respiratory Society’s EBUS training and certification program.5
In conclusion, we hope that more clinicians will follow the example of Scarlata and colleagues and explore the efficacy of training. Preferably, such studies should compare different training programs, for example, comparing the massed-practice regimen used in this study with the distributed-learning program of shorter training sessions spread over multiple days that the authors rightly propose. The highest level of evidence and sensitivity can be achieved by using assessment tools that allow blinded and precise rating of competence.
