Abstract

Problem Statement and Purpose.
The lack of direct observation by faculty may affect meaningful judgments of clinical competence. The purpose of this study was to explore the influence of direct observation on reliability and validity evidence for family medicine clerkship ratings of clinical performance.

Method.
Preceptors rating family medicine clerks (n = 172) on a 16-item evaluation instrument noted the data source for each rating: note review, case discussion, and/or direct observation. Mean data-source scores were computed and categorized as low, medium, or high, with the high-score group including the most direct observation. Analyses examined the influence of data source on interrater agreement and associations between clerkship clinical scores (CCS) and scores from the National Board of Medical Examiners (NBME®) subject examination as well as a fourth-year standardized patient-based clinical competence examination (M4CCE).

Results.
Interrater reliability increased as a function of data source; for the low, medium, and high groups, intraclass correlation coefficients were .29, .50, and .74, respectively. For the high-score group, there were positive correlations between CCS and NBME score (r = .311, p = .054) and between CCS and M4CCE (r = .423, p = .009).

Conclusion.
Reliability and validity evidence for clinical competence is enhanced when more direct observation is included as a basis for clerkship ratings.