New Medical Student Performance Evaluation Standards: Laudable but Inadequate

Excerpt

The Association of American Medical Colleges’ recent Medical Student Performance Evaluation (MSPE) recommendations1 propose a revamped MSPE that organizes evaluative performance data in a clear and concise format, with standardized reporting across institutions. Inclusion of grading and ranking distributions, and clear articulation of the assessments that make up such measures, enhances transparency. Moreover, we applaud the inclusion of Accreditation Council for Graduate Medical Education core competency performance data, as well as the exclusion of United States Medical Licensing Examination scores, which affords candidates discretion in releasing scores. In sum, there is much to like in the new MSPE recommendations. Unfortunately, the recommendations do not address the grading and ranking variability they lay bare.
In a recent study of U.S. medical schools,2 the percentage of students receiving the top grade in any clerkship ranged from 2% to 93%. Similar “extreme variability” is seen in student ranking categories, with the top category containing 3% to 39% of students at different institutions.3 Lower grade and ranking categories display even greater between-institution variability. Even with improved transparency, how can residency program directors appropriately compare students from different institutions whose grades and ranks represent drastically different performance percentiles? Most concerning, if no statistically valid way to compare such measures exists, what heuristics are programs using instead? In our interactions with students across diverse institutions, many have voiced these or similar concerns. Students from institutions with stringent grading distributions are especially vocal about the resulting inequity. Thus, while the MSPE recommendations may better expose underlying between-school variability, they ultimately fail to address the fundamental challenge: nonstandardization in grading and ranking.
As institutions move toward competency-based milestones and entrustable professional activities, we hope grading will be reevaluated entirely. In the meantime, we urge national standards for grading and ranking students that are rooted in statistically valid approaches for comparing candidates. Grading is not a new challenge, and there are clearly no easy solutions. Yet, to ensure fairness in the residency application process and to help program directors evaluate candidates, it is time to explore national standards.