Quality In-Training Evaluation Reports—Does Feedback Drive Faculty Performance?

Clinical faculty often complete in-training evaluation reports (ITERs) poorly. Faculty development (FD) strategies should address this problem. An FD workshop was shown to improve ITER quality, but few physicians attend traditional FD workshops. To reach more faculty, the authors developed an “at-home” FD program offering participants various types of feedback on their ITER quality based on the workshop content. Program impact is evaluated here.


Ninety-eight participants from four medical schools, all clinical supervisors, were recruited in 2009–2010; 37 completed the study. Participants were randomized into five groups: a control group and four groups receiving different feedback conditions. ITER quality was assessed by two raters using a validated tool, the completed clinical evaluation report rating (CCERR). Participants were then given feedback on their ITER quality according to their group assignment. Six months later, participants submitted new ITERs, which were assessed using the CCERR, and feedback was again sent on the basis of group assignment. This process was repeated two more times, ending in 2012.


CCERR scores from the participants in all feedback groups were collapsed (n = 27) and compared with scores from the control group (n = 10). Mean CCERR scores significantly increased over time for the feedback group but not the control group.


The results suggest that faculty can improve ITER quality following a minimal “at-home” FD intervention. These findings also add to the growing literature reporting success in improving the quality of trainee assessments through rater training.
