Using Patient-reported Outcome Measures to Improve Health Care: Time for a New Approach
There is increasing interest in embedding PROMs within health information systems to compare health care providers, and it has been claimed that this has the potential to transform health care into a more patient-centered model.1–3 PROMs have inherent advantages over alternative performance measures such as mortality and clinician-defined morbidity. They tap into the unique and authoritative insight of patients into their own health, avoid the conflict of interest that arises when health care providers rate their own performance, and directly measure the extent to which the main objective of most interventions has been achieved: improving the patient’s health and quality of life. The routine collection of PROMs has already been implemented in many countries,2 and it is clear they should have a role in health care intelligence. But what is that role? We contend that previous attempts to use routinely collected PROMs have disappointed, and we propose a shift in purpose. We focus on the use of routinely collected PROMs to compare health care providers and do not question the value of PROMs within comparative effectiveness research. Nor do we address the use of patient-reported experience measures, sometimes referred to as “PREMs,” to compare providers.
When PROMs are embedded within health information systems, they are predominantly used to compare the performance of individual service providers,4 networks of service providers,5 or insurance providers.6 There are 2 mechanisms by which these comparisons are thought to improve health care. The first is “value-based purchasing,” in which PROMs are used to facilitate purchaser choices and encourage competition.7 The second is “audit and feedback,” in which PROMs are used to uncover flaws in care processes and competencies.8
In the United States, PROMs have been used as value-based purchasing tools within the “star ratings” of Medicare Advantage insurance plans, which influence reimbursement, physician bonuses, and the ability of insurance plans to expand.6 In England, PROMs have been used since 2011 to compare NHS Trust performance for 4 common surgical procedures and are also linked to financial rewards.4 The evidence base supporting these initiatives is weak, and experts in performance assessment have strongly resisted comparing organizations with PROMs.9 For example, an evaluation of the English PROMs Program found no impact on patient outcomes,10 and the whole Program is now under review.4 PROMs were not originally developed to compare providers, and they lack many of the attributes desirable in a performance indicator. Variation in PROMs is heavily influenced by patient-level variables because health perceptions depend on factors other than the health care provider where treatment was received. Typically, <10% of the variation in PROMs occurs at the provider level,10 falling short of the 10% often considered the minimum requirement for a performance indicator.11 High patient-level heterogeneity reduces the precision of provider estimates, which in turn increases the sample sizes and time needed to detect clinically meaningful variation.11 There is evidence in some fields, such as bariatric surgery, that slightly >10% of the variation in PROMs can be explained at the provider level, but even here the amount of variation explained by patient-level factors is much higher.12 An important source of patient-level variation is the type of treatment received.
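To illustrate why a small provider-level share of variation inflates sample-size requirements, the sketch below applies the standard Spearman-Brown relation, under which the reliability of a provider’s mean score from n patients is R = n·ICC / (1 + (n − 1)·ICC). The ICC values and the 0.7 reliability target are illustrative assumptions, not figures from this article:

```python
import math

def patients_needed(icc: float, target_reliability: float) -> int:
    # Spearman-Brown: reliability of a provider's mean PROM score over n
    # patients is R = n*icc / (1 + (n - 1)*icc), where icc is the share of
    # variation at the provider level. Solving for n gives the minimum
    # caseload needed to reach the target reliability.
    return math.ceil(
        target_reliability * (1 - icc) / (icc * (1 - target_reliability))
    )

# With 10% of PROM variation at the provider level (icc = 0.10), reaching
# a commonly used 0.7 reliability requires about 21 cases per provider;
# halving the provider-level share more than doubles the required caseload.
print(patients_needed(0.10, 0.7))  # 21
print(patients_needed(0.05, 0.7))  # 45
```

The point of the calculation is the steep nonlinearity: as the provider-level share of variation falls below 10%, the caseload and data-collection time needed to distinguish providers reliably grows rapidly.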