In-training evaluations: developing an automated screening tool to measure report quality
Abstract

OBJECTIVES

In-training evaluation (ITE) is used to assess resident competencies in clinical settings. This assessment is documented on an evaluation report (In-Training Evaluation Report [ITER]). Unfortunately, the quality of these reports can be questionable. Therefore, training programmes to improve report quality are common. The Completed Clinical Evaluation Report Rating (CCERR) was developed to assess completed report quality and has been shown to do so in a reliable manner, thus enabling the evaluation of these programmes. The CCERR is a resource-intensive instrument, which may limit its use. The purpose of this study was to create a screening measure (Proxy-CCERR) that can predict the CCERR outcome in a less resource-intensive manner.

METHODS

Using multiple regression, the authors analysed a dataset of 269 ITERs to create a model that can predict the associated CCERR scores. The resulting predictive model was tested on the CCERR scores for an additional sample of 300 ITERs.

RESULTS

The quality of an ITER, as measured by the CCERR, can be predicted using a model involving only three variables (R² = 0.61). The predictive variables included the total number of words in the comments, the variability of the ratings and the proportion of comment boxes completed on the form.

CONCLUSIONS

It is possible to model CCERR scores in a highly predictive manner. The predictive variables can be easily extracted in an automated process. Because this model is less resource-intensive than the CCERR, it makes it possible to provide feedback from ITER training programmes to large groups of supervisors and institutions, and even to create automated feedback systems using Proxy-CCERR scores.
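The three-variable regression described above can be sketched as follows. This is a minimal illustration, not the authors' actual model: the feature values, coefficients, and outcome here are synthetic, and the study's fitted coefficients are not reported in the abstract.

```python
import numpy as np

# Hypothetical sketch of a Proxy-CCERR-style regression.
# The three predictors mirror those named in the abstract:
#   - total number of words in the comments
#   - variability (SD) of the ratings
#   - proportion of comment boxes completed
# All data below are synthetic; coefficients are invented for illustration.
rng = np.random.default_rng(0)
n = 269  # size of the modelling sample in the study

word_count = rng.integers(0, 400, n).astype(float)
rating_sd = rng.uniform(0.0, 2.0, n)
prop_boxes = rng.uniform(0.0, 1.0, n)

# Synthetic CCERR-like outcome with noise (invented coefficients).
y = (10 + 0.02 * word_count + 2.0 * rating_sd
     + 5.0 * prop_boxes + rng.normal(0, 1, n))

# Ordinary least squares via numpy's least-squares solver,
# with an intercept column prepended to the design matrix.
X = np.column_stack([np.ones(n), word_count, rating_sd, prop_boxes])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Coefficient of determination (R^2) on the fitting sample.
pred = X @ beta
ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"R^2 = {r2:.2f}")
```

In practice the same fit could be validated on a held-out set, as the study did with its additional 300 ITERs.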
