Re: Some Thoughts on Consequential Epidemiology and Causal Architecture


We would like to thank Professor Charles Poole for his views [1] on our commentary [2]; we are always glad to see interest in our study. We would, however, like to take the opportunity to reply to some of Poole's comments. Our reading is that Poole's commentary reflects some misunderstanding of the positions we outlined in our own comment. We highlight here our five main concerns.
First, we are not asking anyone to “abandon [their] interests in internal validity.” We agree that internal validity is vitally important to scientific inference, and we emphasize this view both here and in our original commentary. We refer readers who are skeptical of our position on internal validity to Chapter 12 of our textbook “Epidemiology Matters” [3], which outlines the stages of validity that have formed a foundation of modern scientific inquiry for many decades. Rather than suggesting that we abandon internal validity, we argue that too often the goal of estimating an internally valid effect becomes the research agenda itself, rather than the process through which we engage in important public health research.
Second, Poole suggests that we “hail” the ratio measure over the difference measure; we do not. We agree with Poole that difference measures provide much more informative evidence than ratio measures regarding the magnitude of the public health impact of exposures. Although this view may represent the epidemiologic and public health “mainstream,” most papers published in medical and epidemiologic journals continue to use ratio measures. Our comment was on the field as it is, not the field as it should be. Furthermore, our argument about the limitations of risk-factor epidemiology is not germane to the merits of differences versus ratios; the same issues apply regardless of the measure. More specifically, in our commentary we focus on the limitations of studying exposure–outcome relations agnostic to the underlying distributions of causes that vary within and across populations. Difference measures often highlight these underlying distributions because they are, rightly, more sensitive to base rates and co-occurring causes. Yet our contention remains that more expansive theorizing and interrogation about the ways in which these measures vary across populations would benefit the field of epidemiology.
Third, Poole asks, “What’s wrong with restricting first and studying interaction later?” We respond with the question, “What’s wrong with laying out a series of hypotheses about causal interactions first, and testing them simultaneously to build a body of testable, high-stakes hypotheses?” It is the latter question, and its difference from the question that Poole posed, that forms the foundation of what we mean by causal architecture. To answer Poole’s question more directly: nothing is wrong with restricting first and studying interaction later. However, we argue that the questions that lead to methods involving restriction are often not the pertinent questions to engage. Poole’s question on restriction leads to his comment that complex systems models rely on parameter estimates from the literature, and that internally valid risk ratios and risk differences from single exposures therefore form the bedrock of a complex systems model’s validity. That is true, but experience with agent-based modeling has taught us that what can be parameterized is typically limited to summary ratio measures of single exposures, because those are what the literature provides, rather than the broad array of interactions that would be far more informative for our models. If the causal architecture approach were adopted, rather than making our complex systems models untenable, as Poole suggests, it would make our models more rigorous and flexible.
Fourth, Poole finds our claim that representative sampling enhances the assessment of interaction “dubious.”