A study of Gleason score interpretation in different groups of UK pathologists; techniques for improving reproducibility


Abstract

Aims

To test the effectiveness of a teaching resource (a decision tree with diagnostic criteria based on published literature) in improving the proficiency of Gleason grading of prostatic cancer by general pathologists.

Methods

A decision tree with diagnostic criteria was developed by a panel of urological pathologists during a reproducibility study. Twenty-four general histopathologists tested this teaching resource. Twenty slides were selected to include a range of Gleason score groups 2–4, 5–6, 7 and 8–10. Interobserver agreement was studied before and after a presentation of the decision tree and criteria. The results were compared with those of the panel of urological pathologists.

Results

Before the teaching session, 83% of readings were within ±1 of the panel's consensus scores. Interobserver agreement among the general pathologists was low (κ = 0.33) compared with that for the panel (κ = 0.62). After the presentation, 90% of readings were within ±1 of the panel's consensus scores, and interobserver agreement among the general pathologists increased to κ = 0.41. The greatest improvement in agreement was seen for the Gleason score group 5–6.
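The abstract does not state which form of the κ statistic was used; the sketch below is a minimal illustration only, assuming Fleiss' kappa for multiple raters assigning slides to the four Gleason score groups, with hypothetical counts standing in for the study data.

    import numpy as np

    def fleiss_kappa(counts: np.ndarray) -> float:
        """Fleiss' kappa; counts[i, j] = number of raters placing
        slide i into Gleason score group j."""
        n_slides, n_groups = counts.shape
        n_raters = counts[0].sum()                         # raters per slide (assumed constant)
        p_j = counts.sum(axis=0) / (n_slides * n_raters)   # overall proportion of each group
        # Per-slide agreement: fraction of rater pairs assigning the same group
        P_i = (counts * (counts - 1)).sum(axis=1) / (n_raters * (n_raters - 1))
        P_bar = P_i.mean()                                 # observed agreement
        P_e = (p_j ** 2).sum()                             # agreement expected by chance
        return (P_bar - P_e) / (1 - P_e)

    # Hypothetical example: 20 slides, 24 raters, groups 2-4, 5-6, 7 and 8-10
    rng = np.random.default_rng(0)
    counts = rng.multinomial(24, [0.1, 0.4, 0.3, 0.2], size=20)
    print(f"kappa = {fleiss_kappa(counts):.2f}")

Values near 0 indicate agreement no better than chance; values near 1 indicate near-complete agreement, so the reported rise from κ = 0.33 to κ = 0.41 represents a modest gain relative to the panel's κ = 0.62.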

Conclusions

The lower level of agreement among general pathologists, relative to the specialist panel, highlights the need to improve observer reproducibility. The improvement achievable from a single training session is likely to be limited; additional strategies include external quality assurance and second opinion within cancer networks.
