New Medical Student Performance Evaluation Standards: Laudable but Inadequate
In a recent study of U.S. medical schools,2 the percentage of students receiving the top grade in any clerkship ranged from 2% to 93%. Similar “extreme variability” is seen in student ranking categories, with the top category containing 3% to 39% of students at different institutions.3 Lower grade and ranking categories display even greater between-institution variability. Even with improved transparency, how can residency program directors appropriately compare students from different institutions whose grades and ranks represent drastically different performance percentiles? Most concerning, if no statistically valid way to compare such measures exists, what heuristics are programs using? In our interactions with students across diverse institutions, many have voiced these or similar concerns. Students from institutions with stringent grading distributions are especially vocal in their concern for inequity. Thus, while the MSPE recommendations may better expose underlying between-school variability, they ultimately fail to address the fundamental challenge: nonstandardization in grading and ranking.
As institutions move toward competency-based milestones and entrustable professional activities, we hope grading will be reevaluated entirely. In the meantime, we urge national standards for grading and ranking students that are rooted in statistically valid approaches for comparing candidates. Grading is not a new challenge, and there are clearly no easy solutions. Yet, to ensure fairness in the residency application process and to help program directors evaluate candidates, it is time to explore national standards.