Assessment of a Clinical Performance Evaluation Tool for Use in a Simulator-Based Testing Environment: A Pilot Study


Abstract

Purpose.

This study assessed a clinical performance evaluation tool for use in a simulator-based testing environment.

Method.

Twenty-three subjects were evaluated during five standardized encounters using a patient simulator (six emergency medicine students, seven house officers, ten chief resident-fellows). Performance in each 15-minute session was compared with performance on an identical number of oral objective structured clinical examination (OSCE) sessions used as controls. Each encounter was scored by a faculty rater using a scoring system previously validated for oral certification examinations in emergency medicine (eight skills rated 1–8; passing = 5.75).

Results.

On both simulator exams and oral controls, chief resident-fellows earned mean “passing” scores [simulator = 6.4 (95% CI: 6.0–6.8), oral = 6.4 (95% CI: 6.1–6.7)]; house officers earned “borderline” scores [simulator = 5.6 (95% CI: 5.2–5.9), oral = 5.5 (95% CI: 5.0–5.9)]; and students earned “failing” scores [simulator = 4.3 (95% CI: 3.8–4.7), oral = 4.5 (95% CI: 3.8–5.1)]. Mean scores differed significantly among the three cohorts in both the oral and the simulator test arms (p < .01).

Conclusions.

In this pilot study, a scoring system standardized for oral OSCEs performed equally well in a simulator-based testing environment.
