We provide an overview of the relative merits of ratio measures (relative risks, risk ratios, and rate ratios) compared with difference measures (risk and rate differences). We discuss evidence that the multiplicative model often fits the data well, so that interactions with other risk factors for the outcome are rarely observed when a logistic, relative risk, or Cox regression model is used to estimate the intervention effect.
As a consequence, additive models, which estimate the risk or rate difference, will often exhibit interactions. Under these circumstances, absolute measures of effect, such as years of life lost, disability- or quality-adjusted years of life lost, and number needed to treat, will not be externally generalizable to populations other than those with risk factor distributions similar to the population in which the intervention effect was estimated. Nevertheless, these absolute measures are often of the greatest importance in public health decision-making.
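The arithmetic behind this point can be sketched with hypothetical numbers: if the risk ratio is constant across strata (the multiplicative model holds), the implied risk difference and number needed to treat necessarily vary with the baseline risk, so the additive scale shows interaction with any factor that shifts baseline risk. The risk ratio of 2 and the baseline risks below are illustrative assumptions, not estimates from the text.

```python
# Sketch (hypothetical numbers): a constant risk ratio across strata
# implies risk differences and NNTs that vary with baseline risk.

def risk_difference(baseline_risk, risk_ratio):
    """Risk difference implied by a baseline risk and a constant risk ratio."""
    return baseline_risk * (risk_ratio - 1)

def nnt(baseline_risk, risk_ratio):
    """Number needed to treat = 1 / |risk difference|."""
    return 1 / abs(risk_difference(baseline_risk, risk_ratio))

RR = 2.0  # assumed constant ratio effect (multiplicative model)
for p0 in (0.01, 0.05, 0.10):  # assumed baseline risks in different strata
    rd = risk_difference(p0, RR)
    print(f"baseline risk {p0:.2f}: risk difference {rd:.2f}, NNT {nnt(p0, RR):.0f}")
```

With a constant risk ratio of 2, a stratum with baseline risk 0.01 has a risk difference of 0.01 (NNT 100), while a stratum with baseline risk 0.10 has a risk difference of 0.10 (NNT 10), which is why the absolute measures do not transport to populations with different risk factor distributions.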
When high-risk study populations are used to estimate effects more efficiently, their risk factor distributions will not be representative of the general population. The relative homogeneity of ratio versus absolute measures therefore has important implications for the generalizability of results across populations.