The Author Responds

Many thanks to Keyes and Galea1 for their letter clarifying some points in their essay on causal architecture and consequential epidemiology.2 As I had hoped,3 several of my misgivings seem to have been misapprehensions. Our agreements now appear to outweigh our disagreements. It is heartening to learn that the authors are not calling for epidemiologists to lessen our concerns with internal validity, to value representative samples for their purported ability to enhance interaction analyses, or even to move away from the study of proximal effects of individual risk factors. We are in accord, after all, that proximal risk-factor effects are the bricks from which causal architectures are constructed.
It is especially encouraging to learn of the authors’ assent1 to the view that public health consequences are best measured as absolute effects, and not as ratios or proportions.3 It has long puzzled me why so many of my epidemiology colleagues report and compare ratio effect measures exclusively, even while taking great pride in the intimate connection of our research to public health. It is usually left to the reader who cares about population-level consequences to convert the ratios into differences, at least approximately, by the mental arithmetic I like to call “reading the relative risks with our risk-difference glasses on.” Sandro Galea is ideally positioned to encourage researchers to do that math for us in his monthly round-up of selected articles in a leading public health journal.4 Although I have no systematic review to back up my impression that absolute effects are estimated there more often than in most medical and epidemiology journals, it is not hard to find praiseworthy exemplars in recent issues.5–9
Keyes and Galea1 ask if I see anything wrong with studying effect-measure modification. I do not, once we are reasonably sure there is an effect to be modified, as I share the widespread reluctance in our field to delve too deeply into interactions when the universal null hypothesis (no effect in anyone under any circumstances) is still on the table. When that hypothesis has been substantively (as opposed to statistically) rejected, I support the authors’ call for greatly increased attention to interactions. I would urge only a framing of the research aims in terms of estimating meaningful measures of effect-measure modification, rather than in the usual terms of null hypothesis testing. Estimation is measurement. Hypothesis testing is decision-making. I firmly believe we learn more of greater value by measuring than by deciding.
Social epidemiology has been defined as the study of the effects of societal conditions on health and the mechanisms by which these effects are produced.10 I emphasize the conjunctive “and” because I consider both lines of research vital to the enterprise. Hence, I did not call “black box” studies (as Susser and Susser11 labeled them with derisive intent) linking upstream societal conditions to downstream health outcomes without parsing the many and complex paths along which those effects are transmitted “the bedrock of social epidemiology.”1 I merely noted something much less extreme, which I thought was unarguable: that “much”3 of value in social epidemiology has been contributed by this black box research. I am delighted with Keyes and Galea’s1 agreement that black box studies are valuable, even while it remains unclear to me exactly how those studies would contribute to complex systems models, which are assembled by piecing together proximal effects.
The authors appear to take me to task for providing just one example of valuable black box studies in social epidemiology. That example was chosen because it was the one and only example offered in a paper promoting complex systems modeling.