The Dependability of Medical Students’ Performance Ratings as Documented on In-Training Evaluations


Abstract

Purpose
To demonstrate an approach for obtaining an unbiased estimate of the dependability of students' performance ratings during training when the data-collection design includes nesting of students in raters, unbalanced nest sizes, and dependent observations.

Method
In 2003, two variance components analyses of in-training evaluation (ITE) report data were conducted using urGENOVA software. In the first analysis, the dependability for the nested, unbalanced data-collection design was calculated. In the second, a multiple generalizability studies approach was used to obtain an unbiased estimate of the student variance component and, in turn, an unbiased estimate of dependability.

Results
The results suggested that estimates of the dependability of students' performance on ITEs are biased by the data-collection design. When the bias was corrected, the dependability of the ratings of student performance was almost zero.

Conclusion
Combining the multiple generalizability studies method with specialized software provides an unbiased estimate of the dependability of ratings of student performance on ITE scores for data-collection designs that include nesting of students in raters, unbalanced nest sizes, and dependent observations.
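To make the notion of dependability concrete, the sketch below computes an absolute-error dependability (Φ) coefficient from variance components in a simple one-facet student-by-rater design. This is a minimal illustration with invented variance component values, not the paper's analysis: the study's actual design was nested and unbalanced and was analyzed with urGENOVA, which estimates the components directly from the rating data.

```python
def phi_coefficient(var_student, var_rater, var_residual, n_raters):
    """Absolute-error dependability for a one-facet s x r design.

    Phi = sigma^2(student) / (sigma^2(student) + absolute error variance),
    where the absolute error variance pools the rater and residual
    components, each divided by the number of raters per student.
    """
    abs_error = (var_rater + var_residual) / n_raters
    return var_student / (var_student + abs_error)


# Invented variance components for illustration only:
# student = 0.20, rater = 0.05, residual = 0.75, 8 raters per student.
print(round(phi_coefficient(0.20, 0.05, 0.75, 8), 3))  # prints 0.667
```

If the student variance component is biased upward by the data-collection design, as the paper reports, Φ is inflated; correcting the student component toward zero drives Φ toward zero as well.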