Correspondence Between Results and Aims of Funding Support in EPIDEMIOLOGY Articles

The credibility of epidemiologic research and its relation to prespecification of hypotheses and approaches to data analysis have been addressed on the pages of EPIDEMIOLOGY since its first issue.1–3 Recently, the lay and scientific media have raised concerns about the replicability of scientific research.4–6 Selective reporting of findings based on “hypothesizing after results are known,” or HARKing,7 and results-driven data analysis,8 or p-hacking, may lead to incorrect or exaggerated scientific results that cannot be later replicated.9,10 Selective reporting of findings has been empirically demonstrated. A landmark study of randomized trials found that over half of the outcomes were incompletely reported, that “statistically significant” outcomes were more likely to be reported, and that over 60% of trials had at least one primary outcome that differed from the ones listed in the study protocol.11 Soon thereafter, medical journals began to require preregistration of clinical trials.12,13
One consequence of the implementation of trial registration has been a parallel call for compulsory preregistration of nonrandomized epidemiologic research,14–16 which proponents argue would allow comparison of published results with preregistered objectives and protocols, including the study hypothesis. Although there are important logical and philosophical reasons to question the preeminence of a priori hypotheses over a posteriori hypotheses,17–20 many scientists assign greater credibility to results that correspond to the former.9,21 For reasons explained elsewhere, the editors of EPIDEMIOLOGY have resisted calls for compulsory preregistration22 and other regimentation of epidemiologic research.23,24 Nonetheless, because conversations about selective reporting in observational studies have been hampered by sparse empirical data, we undertook a self-study to evaluate the correspondence between results published in our journal and the prespecified objectives of the funding mechanisms that authors reported had supported the work leading to the publication.
For every original research article and brief report published by EPIDEMIOLOGY in 2013, 2014, and 2015, we extracted the publication’s abstract and its information on the sources of funding as provided by the authors. We attempted to locate Internet databases for each funding source and downloaded the summary information submitted by the authors to the funding source at the time of their application for funding support (e.g., the specific aims, project objectives, or other description of anticipated work). One of us (T.L.) compared the abstract with all available descriptions of objectives and categorized the results in the abstract into one of five categories: (1) the published result was clearly among the funded aims; (2) the published result was possibly among the funded aims; (3) there was no evidence that the published result was among the funded aims; (4) the funding information was inconclusive; or (5) no funding information was available. The category “the funding information was inconclusive” most often applied to articles for which no Internet database was located for the funding source, the funding source information was unavailable in English and attempts at Internet-enabled translation failed, or the funding source information did not include a description of the project’s objectives that was submitted at the time of application for funding support. The category “no funding information was available” applied to articles for which the authors listed no source of funding support. As a secondary evaluation, we repeated the analysis with restriction to nonmethods articles. As a validation substudy, four editors each reviewed the information for 10 publications, selected at random and without replacement. Finally, for each publication that reported a ratio estimate of association, we extracted the first ratio estimate and its 95% confidence interval from the abstract. We extracted the first ratio estimate with the expectation that it would represent the main result and with the intention to select a result systematically rather than preferentially.
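
For readers who want a concrete picture of this review workflow, the short Python sketch below shows one way the five correspondence categories, the ten-publication validation samples drawn without replacement, and the extraction of the first ratio estimate with its 95% confidence interval could be encoded. The class and function names, the regular expression, and the fixed random seed are illustrative assumptions for this sketch; the article does not describe the actual tooling used.

    # Illustrative sketch only (not the authors' actual workflow): hypothetical
    # helpers for categorizing abstracts against funded aims, drawing the
    # validation sample, and pulling the first ratio estimate from an abstract.
    import random
    import re
    from enum import Enum


    class AimCorrespondence(Enum):
        """Five categories comparing a published result with the funded aims."""
        CLEARLY_AMONG_AIMS = 1      # result was clearly among the funded aims
        POSSIBLY_AMONG_AIMS = 2     # result was possibly among the funded aims
        NO_EVIDENCE_AMONG_AIMS = 3  # no evidence the result was among the aims
        FUNDING_INCONCLUSIVE = 4    # funding information was inconclusive
        NO_FUNDING_INFO = 5         # authors listed no source of funding support


    # Assumed pattern for a ratio estimate followed by a 95% confidence interval,
    # e.g. "RR = 1.4 (95% CI: 1.1, 1.8)" or "odds ratio 2.0 (95% CI 1.2-3.3)".
    _ESTIMATE_RE = re.compile(
        r"(\d+\.?\d*)\s*"                                # point estimate
        r"\(\s*95%\s*(?:CI|confidence interval)[:\s]*"   # start of the 95% CI
        r"(\d+\.?\d*)\s*[,\-–]\s*(\d+\.?\d*)\s*\)",      # lower and upper bounds
        re.IGNORECASE,
    )


    def first_ratio_estimate(abstract: str):
        """Return (estimate, lower, upper) for the first ratio estimate with a
        95% confidence interval found in the abstract, or None if none is found."""
        match = _ESTIMATE_RE.search(abstract)
        if match is None:
            return None
        return tuple(float(g) for g in match.groups())


    def assign_validation_sample(publication_ids, editors, per_editor=10, seed=0):
        """Randomly assign `per_editor` publications to each editor without
        replacement, mirroring the design of the validation substudy."""
        rng = random.Random(seed)  # fixed seed only to make the example reproducible
        sampled = rng.sample(publication_ids, per_editor * len(editors))
        return {
            editor: sampled[i * per_editor:(i + 1) * per_editor]
            for i, editor in enumerate(editors)
        }


    if __name__ == "__main__":
        abstract = "The adjusted RR was 1.4 (95% CI: 1.1, 1.8) for the primary exposure."
        print(first_ratio_estimate(abstract))  # (1.4, 1.1, 1.8)
        print(assign_validation_sample(list(range(120)), ["A", "B", "C", "D"]))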