Background:
Previous studies have demonstrated that programs emphasize United States Medical Licensing Examination scores, publications, and geography in creating rank lists. The authors aimed to quantify the importance of geography and to determine how eliminating geographic preferences would affect Match outcomes.

Methods:
The Match algorithm was implemented and validated on 6 years of deidentified data from the San Francisco Match (2009 to 2014). A “consensus” ranking was generated for each year: all applicants were ordered into a single list using Markov chain rank aggregation. Each program’s rank list was reordered to follow the consensus order, and a new Match result was simulated. Statistical analysis was carried out with Microsoft Excel.

Results:
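The two computational steps described above can be sketched in a few dozen lines. This is an illustration only, not the authors' code: the abstract names "Markov chain rank aggregation" and "the Match algorithm" without further detail, so the specific chain used here (an MC2-style variant after Dwork et al.) and the deferred-acceptance formulation of the match are assumptions, and all toy data are invented.

```python
# Illustrative sketch only; not the authors' implementation. The
# rank-aggregation variant (MC2-style) and all toy data are assumptions:
# the abstract specifies only "Markov chain rank aggregation" and
# "the Match algorithm".

def consensus_ranking(rank_lists, damping=0.15, iters=500):
    """Aggregate full rank lists into one consensus order.

    Markov chain over items: from item i, pick an input list at random
    and move to an item that list ranks at or above i. The stationary
    distribution then orders the items, most preferred first.
    Assumes every list ranks every item.
    """
    items = sorted(rank_lists[0])
    idx = {item: k for k, item in enumerate(items)}
    n = len(items)
    P = [[0.0] * n for _ in range(n)]
    for lst in rank_lists:
        pos = {item: k for k, item in enumerate(lst)}
        for i in lst:
            at_or_above = [j for j in lst if pos[j] <= pos[i]]
            w = 1.0 / (len(rank_lists) * len(at_or_above))
            for j in at_or_above:
                P[idx[i]][idx[j]] += w
    # small damping term guarantees a unique stationary distribution
    P = [[(1 - damping) * p + damping / n for p in row] for row in P]
    pi = [1.0 / n] * n
    for _ in range(iters):  # power iteration
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return [item for _, item in sorted(zip(pi, items), reverse=True)]


def deferred_acceptance(applicant_prefs, program_rank):
    """Applicant-proposing deferred acceptance (Gale-Shapley) with one
    position per program; real matches add program quotas on top.

    applicant_prefs: {applicant: [program, ...]}, most preferred first.
    program_rank: {program: {applicant: rank}}, lower rank = preferred.
    """
    next_choice = {a: 0 for a in applicant_prefs}
    held = {}                      # program -> tentatively held applicant
    free = list(applicant_prefs)   # applicants still proposing
    while free:
        a = free.pop()
        prefs = applicant_prefs[a]
        while next_choice[a] < len(prefs):
            p = prefs[next_choice[a]]
            next_choice[a] += 1
            if a not in program_rank[p]:
                continue           # program did not rank this applicant
            cur = held.get(p)
            if cur is None or program_rank[p][a] < program_rank[p][cur]:
                held[p] = a
                if cur is not None:
                    free.append(cur)  # displaced applicant proposes again
                break
    return {a: p for p, a in held.items()}
```

Under this sketch, the simulated Match would reorder each program's ranked applicants by their position in the consensus list and rerun `deferred_acceptance` on the modified lists.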
Variation of program rank lists from the consensus rank list was driven by geography (training in the same medical center or state as the ranking program), “pedigree” (top 25 ranking of applicants’ prior training), and foreign medical graduate status. Step 1 scores, publications, and medical school or residency region were not factors. The simulated Match resulted in a slight increase in the match rate. The median normalized number needed to match decreased from 6.7 to 6.5, and 80 percent of applicants had an unchanged or better result compared with the actual Match.

Conclusions:
Geography is the primary driver of variation between program rank lists. Removing this variation would result in fewer unfilled positions, no significant change in the average number needed to match, and improved Match outcomes for most applicants. Programs should critically evaluate whether their geographic biases reflect underlying information about applicant quality.