Concurrent Validity and Sensitivity to Change of Direct Behavior Rating Single-Item Scales (DBR-SIS) Within an Elementary Sample
The purpose of this study was to evaluate the concurrent validity, sensitivity to change, and teacher acceptability of Direct Behavior Rating single-item scales (DBR-SIS), a brief progress monitoring measure designed to assess student behavioral change in response to intervention. Twenty-four elementary teacher–student dyads implemented a daily report card intervention to promote positive student behavior during prespecified classroom activities. During both baseline and intervention, teachers completed DBR-SIS ratings of 2 target behaviors (i.e., Academic Engagement and Disruptive Behavior) while research assistants collected systematic direct observation (SDO) data on the same behaviors. Five change metrics (i.e., absolute change, percentage change from baseline, improvement rate difference, Tau-U, and standardized mean difference; Gresham, 2005) were calculated for both DBR-SIS and SDO data, yielding estimates of the change in student behavior in response to intervention. Mean DBR-SIS scores were predominantly moderately to highly correlated with SDO data within both the baseline and intervention phases, supporting the concurrent validity of DBR-SIS. DBR-SIS change metrics were also significantly correlated with SDO change metrics for both Disruptive Behavior and Academic Engagement, supporting the measure's sensitivity to change. In addition, teachers' ratings on the Usage Rating Profile-Assessment (URP-A) indicated that they found DBR-SIS acceptable and usable. Implications for practice, study limitations, and areas for future research are discussed.
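To make the change metrics named above concrete, the following is a minimal illustrative sketch of how several of them could be computed from phase-level data. The data values are hypothetical (not from this study), the Tau-U shown is the basic A-versus-B nonoverlap form without baseline-trend correction, and improvement rate difference is omitted for brevity; the standardized mean difference is scaled by the baseline standard deviation, consistent with the single-case formulation attributed to Gresham (2005).

```python
from statistics import mean, stdev

def absolute_change(baseline, intervention):
    """Difference between intervention and baseline phase means."""
    return mean(intervention) - mean(baseline)

def percent_change(baseline, intervention):
    """Change in phase means expressed as a percentage of the baseline mean."""
    return 100.0 * (mean(intervention) - mean(baseline)) / mean(baseline)

def tau_u(baseline, intervention):
    """Basic Tau (A-vs-B nonoverlap): proportion of cross-phase pairs
    in which the intervention point exceeds the baseline point, minus
    the proportion in which it falls below (no trend correction)."""
    s = sum((b > a) - (b < a) for a in baseline for b in intervention)
    return s / (len(baseline) * len(intervention))

def smd(baseline, intervention):
    """Standardized mean difference using the baseline SD."""
    return (mean(intervention) - mean(baseline)) / stdev(baseline)

# Hypothetical DBR-SIS Academic Engagement ratings (0-10 scale)
base = [4, 5, 3, 4, 5]
interv = [7, 8, 8, 9, 7]
print(absolute_change(base, interv))  # 3.6
print(tau_u(base, interv))            # 1.0 (complete nonoverlap)
```

In this toy example every intervention point exceeds every baseline point, so Tau-U reaches its ceiling of 1.0; with real classroom data the metrics would typically diverge, which is why the study compares all five across DBR-SIS and SDO.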