The Use of Systematic Reviews When Designing and Reporting Surgical Trials
We previously investigated discrepancies between registry entries and final reports of randomized controlled trials (RCTs) published in 2010 in Annals of Surgery, JAMA Surgery, and the British Journal of Surgery.1 The aim of the current secondary analysis of these trials was to investigate the use of systematic reviews (SRs) to inform trial design and for overall evidence synthesis.
Clinical research projects typically target new research questions or the extension, confirmation, or rejection of previous findings. An extensive literature review is therefore a prerequisite, especially when planning clinical trials. Evidence from SRs and, when applicable, meta-analyses should be considered to inform clinical trial design.2 Moreover, when reporting trial results, authors should use them to update previous SRs, as recommended for the “Discussion” section by the CONSORT group: “Ideally, we recommend a systematic review and indication of the potential limitation of the discussion if this cannot be completed.”3(p685)
The use of SRs to inform trial design is important from an ethical, scientific, and economic point of view. Some funding agencies ask for knowledge synthesis in grant applications.4
However, knowledge synthesis before conducting trials is not uniformly applied. An investigation of 48 RCTs funded by the National Institute for Health Research Health Technology Assessment (NIHR HTA) program between 2006 and 2008 found that 77.1% of the trials referenced an SR; however, only 41.7% used an SR to inform trial design.5 SRs were used to define the primary outcome and the description of adverse events, to calculate the sample size, and to determine the duration of follow-up. A series of investigations of RCTs published in the May issues of Annals of Internal Medicine, BMJ, JAMA, Lancet, and The New England Journal of Medicine in 1997,6 2001,7 2005,8 2009,9 and 201210 evaluated whether the trial reports referred to the existing body of evidence in the “Discussion” section and, from 2005 onward, whether SRs were used in the “Introduction” section. The results suggest that the proportion of trials synthesizing new findings with previous findings is low (39% in 2012, excluding trials that were the first to address the question), without apparent progress over the years.10 To the best of our knowledge, the use of SRs to inform trial design and for overall evidence synthesis has not previously been investigated in surgical trials. The aim of the current study was to investigate to what extent information from SRs is used (1) to design trials and (2) to synthesize results, evaluating trials published in 2010 in 3 major surgical journals.1
All 2010 issues of Annals of Surgery, JAMA Surgery, and British Journal of Surgery were screened for RCTs, and study characteristics were extracted as previously described.1 For the present investigation, 2 reviewers (R.R. and K.D.) independently extracted information concerning the use of SRs in the “Introduction” section (justification of the research) and the “Discussion” section (synthesis of results), using extraction forms adapted from those previously described in the literature.6–10 In addition, the “Methods” sections were screened for statements that the trial design had been informed by an SR, adapting the extraction scheme applied to evaluate the use of SRs to inform study design in grant applications to the NIHR HTA.5 Discrepancies between the 2 reviewers were resolved by discussion. Data were extracted into an Excel spreadsheet (Microsoft Office XP; Microsoft Corporation, Redmond, WA). Descriptive statistics were calculated using Intercooled Stata (version 12.1; StataCorp LP, College Station, TX).
The flow of included studies and individual study details were published previously.1 In brief, of 596 studies identified through the search of the 3 journals, 51 RCTs fulfilled the inclusion criteria.