Inter-rater reliability of two paediatric early warning score tools


Abstract

Background

Paediatric early warning score (PEWS) assessment tools can assist healthcare providers in the timely detection and recognition of subtle changes in patient condition signalling clinical deterioration. However, data from PEWS tools are only as reliable and accurate as the caregivers who obtain and document the parameters.

Objective

The aim of this study was to evaluate inter-rater reliability among nurses using PEWS systems.

Design

The study was carried out in five paediatric departments in the Central Denmark Region. Inter-rater reliability was investigated through parallel observations: two nurses simultaneously performed a PEWS assessment on the same patient. A total of 108 children and 69 nurses participated. Before each assessment, the two participating nurses drew lots to decide who would be the active observer. The intraclass correlation coefficient, Fleiss’ κ and Bland–Altman limits of agreement were used to determine inter-rater reliability.
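To illustrate one of the agreement statistics named above, the following is a minimal sketch of how Bland–Altman limits of agreement can be computed for paired rater scores. The function name and the paired scores are hypothetical illustrations, not data or code from the study.

```python
import numpy as np

def bland_altman_limits(rater_a, rater_b):
    """Return (bias, lower limit, upper limit) for two raters' paired scores.

    Bias is the mean of the pairwise differences; the limits of agreement
    are bias +/- 1.96 standard deviations of those differences.
    """
    a = np.asarray(rater_a, dtype=float)
    b = np.asarray(rater_b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical aggregated PEWS scores from two nurses assessing the same children
nurse_1 = [0, 1, 2, 3, 1, 0, 2]
nurse_2 = [0, 1, 2, 2, 1, 0, 3]
bias, lower, upper = bland_altman_limits(nurse_1, nurse_2)
```

Narrow limits of agreement around a bias close to zero would indicate that the two raters' scores rarely diverge, which is the pattern the study's results describe.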

Results

The intraclass correlation coefficients for the aggregated PEWS scores of the two PEWS models were 0.98 and 0.95, respectively. The κ values for the individual PEWS measurements ranged from 0.70 to 1.0, indicating good to very good agreement. The nurses assigned exactly the same aggregated score for both PEWS models in 76% of cases. In 98% of the assessments, the two nurses’ aggregated PEWS scores differed by no more than 1 point in both models.

Conclusion

The study showed good to very good inter-rater reliability in the two PEWS models used in the Central Denmark Region.
