The Effect of Item Screeners on the Quality of Patient Survey Data: A Randomized Experiment of Ambulatory Care Experience Measures

Abstract

Background

Item screeners are widely viewed as an essential feature of quality survey design because they ensure that questions applying to only a subset of the sample are answered solely by respondents 'qualified' to answer them. However, empirical evidence supporting this view is scant.

Objective

This study compares data quality resulting from the administration of ambulatory care experience measures that use item screeners versus tailored ‘not applicable’ options in response scales.

Methods

Patients from the practices of 367 primary care physicians in 65 medical groups were randomly assigned to receive one of two versions of a well-validated ambulatory care experience survey. Respondents (n = 2240) represent random samples of active, established patients from participating physicians' panels.

The 'screener' version of the survey included item screeners for five test items; the 'no screener' version instead offered tailored 'not applicable' options in the response scales of those items.

The main outcome measures were indicators of data quality for the two item versions: mean item scores, levels of missing values, the outgoing patient sample sizes needed to achieve adequate medical group-level reliability, and the relative ranking of medical groups.
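For context on the reliability-based outcome, the number of respondents needed per medical group is conventionally derived from the Spearman-Brown prophecy formula applied to an item's intraclass correlation (ICC). The sketch below illustrates that standard calculation; the 0.70 reliability target and the ICC values are illustrative assumptions, not figures from this study.

```python
def group_level_reliability(n: float, icc: float) -> float:
    """Spearman-Brown: reliability of a medical group's mean score
    based on n respondents, given the item's intraclass correlation."""
    return (n * icc) / (1.0 + (n - 1.0) * icc)


def respondents_needed(target: float, icc: float) -> float:
    """Invert the formula: respondents per group required to reach
    a target group-level reliability."""
    return target * (1.0 - icc) / (icc * (1.0 - target))


# Illustrative values only: noisier items (lower ICC) need larger samples.
for icc in (0.05, 0.10):
    n = respondents_needed(0.70, icc)
    print(f"ICC={icc:.2f}: ~{n:.0f} respondents/group for 0.70 reliability")
```

The outgoing (mailed) sample per group would then be inflated by the inverse of the expected response rate and, for screened items, by the expected share of respondents who pass the screener.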

Results

Mean survey item scores generally did not differ by version. The 'screener' versions consistently yielded fewer respondents than the 'no screener' versions. However, because the 'screener' versions improved measurement precision, four of the five items required smaller outgoing patient samples to achieve adequate medical group-level reliability than their 'no screener' counterparts did. The relative ranking of medical groups did not differ by item version.
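To see how improved precision can more than offset a smaller responding pool, the same Spearman-Brown calculation can be run with illustrative numbers (the ICCs and respondent counts below are assumptions for exposition, not estimates from this study): screening out unqualified respondents shrinks n but can raise the between-group signal (ICC) enough to increase group-level reliability.

```python
def group_level_reliability(n: float, icc: float) -> float:
    # Spearman-Brown reliability of a group mean (same formula as above).
    return (n * icc) / (1.0 + (n - 1.0) * icc)

# Hypothetical numbers: fewer but better-qualified respondents (higher ICC)
# can outscore a larger, noisier pool.
print(round(group_level_reliability(30, 0.10), 2))  # 'screener'    -> 0.77
print(round(group_level_reliability(40, 0.05), 2))  # 'no screener' -> 0.68
```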

Conclusion

Screeners appear to reduce noise by ensuring that respondents who are not ‘qualified’ to answer a question are screened out instead of providing unreliable responses. The increased precision resulting from ‘screener’ versions appears to more than offset the higher item non-response rates compared with ‘no screener’ versions.
