Discrete-choice experiments (DCEs), while becoming increasingly popular, have rarely been tested for validity and reliability.

Objective
To address the validity and reliability of willingness-to-accept (WTA) values obtained from DCEs and, in particular, to examine whether differences in the attribute set describing a hypothetical product influence respondents' preferences and willingness-to-pay (WTP) values.

Methods
Two DCEs were designed, featuring hypothetical insurance contracts for Swiss healthcare. The contract attributes were pre-selected in expert sessions with representatives of the Swiss healthcare system, and their relevance was checked in a pre-test. Experiment A contained rather radical health system reform options, while experiment B concentrated on more familiar elements such as co-payment and the benefit catalogue. Three attributes were present in both experiments: delayed access to innovation (‘innovation’), restricted drug benefit (‘generics’), and the change in the monthly premium (‘premium’). The issue to be addressed was whether WTA values for the overlapping attributes were similar, even though they were embedded in widely differing choice sets.

Two representative telephone surveys with 1000 people aged >25 years were conducted independently in the German- and French-speaking parts of Switzerland during September 2003. Socioeconomic variables collected included age, sex, education, total household income, place of residence, occupation, and household size. Three models were estimated: a simple linear model, a model allowing interaction of the price attribute with socioeconomic characteristics, and a model with a full set of interaction terms.

Results
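In linear-in-attributes choice models of the kind estimated here, a marginal WTA value is conventionally derived as the (negative) ratio of an attribute coefficient to the price (premium) coefficient. A minimal sketch of that computation, using hypothetical coefficient values that are not taken from the study:

```python
# Hedged sketch: deriving a marginal WTA from choice-model coefficients.
# All coefficient values below are hypothetical, for illustration only.

def marginal_wta(beta_attribute: float, beta_premium: float) -> float:
    """Compensating change in the monthly premium for a one-unit change
    in an attribute: the premium change Δp solving
    beta_attribute * 1 + beta_premium * Δp = 0.
    A negative result means the premium must fall; its magnitude is
    the respondent's WTA for accepting the attribute change."""
    return -beta_attribute / beta_premium

# Hypothetical estimates: disutility of delayed access to innovation,
# and the (negative) marginal utility of a 1-CHF premium increase.
beta_innovation = -0.45
beta_premium = -0.009

delta_premium = marginal_wta(beta_innovation, beta_premium)
print(delta_premium)  # premium change in CHF/month; prints -50.0,
# i.e. a 50-CHF premium reduction (WTA of CHF 50) compensates the delay
```

The comparison of interest in the study is then whether such ratios for the overlapping attributes come out similar across the two experiments.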
The socioeconomic characteristics of the two samples were very similar. Theoretical validity tends to receive empirical support in both experiments in all cases where economic theory makes predictions concerning differences between socioeconomic groups. However, a systematic distorting influence on measured WTA appears to be present in at least one experiment, most likely experiment A, in which respondents were far less familiar with the proposed alternatives than in experiment B.

Conclusions
Measuring preferences for major, little-known innovations in a reliable way seems to present particular challenges for experimental research.