1 Royal Columbian Hospital, Department of Surgery, University of British Columbia, Vancouver, Canada
2 Advocate Lutheran General Hospital, Division of Colon and Rectal Surgery, University of Illinois at Chicago, Chicago, Illinois
BACKGROUND: Apprenticeship in training new surgical skills is problematic because it involves human subjects. To date, there are limited inanimate trainers for rectal surgery.

OBJECTIVE: The purpose of this article is to present manufacturing details accompanied by evidence of construct, face, and content validity for a robotic rectal dissection simulation.

DESIGN: Residents and experts were recruited and tested on performing a simulated total mesorectal excision. Time for each dissection was recorded. Effectiveness of retraction to achieve adequate exposure was scored on a dichotomous yes-or-no scale. The number of critical errors was counted. Dissection quality was rated using a visual 7-point Likert scale. The times and scores were then compared to assess construct validity, and results from two scorers were used to show interobserver agreement. A 5-point Likert scale questionnaire was administered to each participant inquiring about basic demographics, surgical experience, and opinion of the simulator. Survey data relevant to the determination of face validity (realism and ease of use) and content validity (appropriateness and usefulness) were then analyzed.

SETTINGS: The study was conducted at a single teaching institution.

SUBJECTS: Residents and trained surgeons were included.

INTERVENTION: The study intervention included total mesorectal excision on an inanimate model.

MAIN OUTCOME MEASURES: Metrics confirming or refuting that the model can distinguish between novices and experts were measured.

RESULTS: A total of 19 residents and 9 experts were recruited. The residents-versus-experts comparison showed average completion times of 31.3 versus 10.3 minutes, percentages achieving adequate exposure of 5.3% versus 88.9%, numbers of errors of 31.9 versus 3.9, and dissection quality scores of 1.8 versus 5.2. Correlations of R = 0.977 or better between the two scorers confirmed interobserver agreement. Overall average scores were 4.2 of 5.0 for face validation and 4.5 of 5.0 for content validation.

LIMITATIONS: The use of a da Vinci microblade instead of hook electrocautery was a study limitation.

CONCLUSIONS: The pelvic model showed evidence of construct validity, because all of the measured performance indicators accurately differentiated the 2 groups studied. Furthermore, study participants provided evidence for the simulator's face and content validity. These results justify proceeding to the next stage of validation, which consists of evaluating predictive and concurrent validity. See Video Abstract at http://links.lww.com/DCR/A551.