Examinee Performance on Computer-based Case Simulations as Part of the USMLE Step 3 Examination: Are Examinees Ordering Dangerous Actions?

Excerpt

Computer-based case simulations (CCSs) have been used in addition to fixed-format items in USMLE Step 3 since November 1999. The ability to assess physicians' patient-management skills with simulation has added a new dimension to the physician licensure assessment. Each patient management case begins with an opening scenario describing the patient's location and presentation. Using free-text entry, the examinee orders tests, treatments, and consultations and selects physical examinations from a list of options while advancing the case through simulated time. Within the dynamic simulation framework, the patient's condition changes based on both the actions taken by the examinee and the underlying problem. More extensive detail regarding the CCS format1 and scoring2,3 may be found elsewhere.
The CCS free-text entry format and availability of over 2,500 unique clinical orders make it possible to assess the extent to which examinees make dangerous management errors. Although this potential has been touted as a significant advantage of the CCS format,1 no study has examined this behavior in the CCS context. Previous research4,5,6,7 on the selection of dangerous actions has, however, been conducted using written medical certification examinations. A driving force behind these studies was to provide insight into whether examinees' scores should be based, in part, on their propensity to select dangerous answers. The findings did not support such action. One study4 found dangerous actions to be so highly correlated with total test score that accounting for these actions would not provide additional information regarding examinee proficiency. In a second study,6 examinee performance, as measured by an oral examination and report by a clinical competency committee, indicated that selecting a disproportionately high number of dangerous answers on the written examination did not predict dangerous clinical thinking or behavior.
These studies have provided useful information regarding the extent to which different groups of candidates select dangerous actions. However, one limitation of examining this behavior in written examinations is that fixed-format items constrain the list of answer options. In contrast, examinees construct their own responses to CCS cases, which makes the range of dangerous actions that can be measured essentially unlimited.
The purpose of this research was to focus on dangerous interventions ordered by examinees while managing CCS cases. Specifically, it was of interest (1) to quantify the extent to which licensure candidates managing CCS cases order dangerous, nonindicated interventions; (2) to understand how these types of examinee behavior relate to other measures produced in CCS scoring; and (3) to understand the nature of the dangerous interventions ordered and their relationships to case content.