Mean Deviation of Inter-rater Scoring (MDIS): a simple tool for introducing conformity into groups of clinical investigators

Abstract

In spite of considerable progress over the past decade, training investigators to achieve inter-rater reliability in clinical trials remains a major problem. The aim of the present study was to propose a new tool to increase data homogeneity by introducing conformity into groups of clinical investigators. The investigator scoring grid we propose, the Mean Deviation of Inter-rater Scoring (MDIS), is based on the score deviation of each investigator relative to the median score of an expert group that evaluated the same videotape-recorded clinical case. Whatever the scale used, the deviation for each item is calculated as the absolute value of the difference between the investigator's score and the median of the experts' scores for that item. The MDIS for an investigator is then obtained by summing these item deviations and dividing the total by the number of items in the scale. Examples from practice are given for several rating scales: (i) the Hamilton Anxiety Rating Scale; (ii) the Hamilton Depression Rating Scale; (iii) the Montgomery Åsberg Depression Rating Scale; and (iv) the Positive and Negative Symptoms Scale. Finally, such a method could also be employed by experts to evaluate the quality of videotape-recorded clinical cases used in clinical trials, as well as by teachers to evaluate initial or continuing medical training.
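As a rough illustration of the calculation described above, the following Python sketch computes the MDIS for a single investigator. It is a minimal sketch, not the authors' implementation: the function name, the three-item scale, the five-member expert panel, and all scores are hypothetical and chosen only to show the arithmetic.

from statistics import median

def mdis(investigator_scores, expert_scores_per_item):
    """Mean Deviation of Inter-rater Scoring for one investigator.

    investigator_scores: list of item scores given by the investigator.
    expert_scores_per_item: list of lists; the expert panel's scores for each item.
    """
    # Absolute deviation of the investigator's score from the expert median, item by item.
    deviations = [
        abs(score - median(expert_scores))
        for score, expert_scores in zip(investigator_scores, expert_scores_per_item)
    ]
    # Sum of the item deviations divided by the number of items in the scale.
    return sum(deviations) / len(deviations)

# Hypothetical example: a 3-item scale, one investigator, a 5-member expert panel.
investigator = [3, 3, 0]
experts = [[2, 2, 3, 2, 2], [4, 3, 3, 4, 3], [1, 0, 1, 1, 2]]
print(mdis(investigator, experts))  # (|3-2| + |3-3| + |0-1|) / 3 ≈ 0.67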
