Here, we briefly mention a few approaches under development. The company Penelope has developed an automated tool that screens articles for errors and missing information. This tool may also be useful for checking adherence to reporting guidelines such as the Preferred Reporting Items for Systematic Reviews and Meta-analyses.2 A 2-tiered peer review approach, in which the methods are reviewed and approved before results are submitted, has been suggested and tested.3 BMC Psychology is pilot testing a “results-free peer review” in which reviewers are blinded to results during initial review.4 There are also journals that publish negative results.5,6 In an extreme example, the journal Basic and Applied Social Psychology banned P values from studies altogether.7 Actions like these are interesting; however, we are aware of no evidence evaluating the utility of these strategies.
In addition to measures that may limit publication bias, we believe publication bias should be addressed explicitly in systematic reviews. First, we advocate for improved search strategies. Systematic literature searches that use only databases such as MEDLINE or Embase will return only published studies, and evidence suggests that, in some cases, including unpublished data alters the direction and magnitude of pooled effect estimates. Although there are many sources for searching gray literature, we find 2 especially promising. Clinical trial registries have been developed worldwide to catalog ongoing clinical trials. Section 801 of the Food and Drug Administration Amendments Act mandated that applicable clinical trials be registered on ClinicalTrials.gov before patient enrollment and that summary data be reported within 1 year of study completion.8 Several studies have confirmed that clinical trial registries may be a useful adjunct to traditional database searches for locating unpublished data.9–14 Recently, the beta version of opentrials.net was released with the intent of making all data and documents related to a clinical trial publicly available. These documents include Food and Drug Administration regulatory documents from Drugs@FDA, trial registry information, and trial protocols, to name a few.15
Second, we would like to suggest that systematic reviewers consider more robust methods for conducting publication bias assessments. In Hedin et al,16 visual inspection of funnel plots was commonly reported. This method is inherently subjective and unlikely to produce accurate indications of publication bias. Terrin et al17 found that medical researchers correctly identified the presence or absence of publication bias in only 52% of the funnel plots they inspected. Regression-based funnel plot approaches, also commonly found in our study, may lack sufficient power to detect true funnel plot asymmetry when meta-analyses comprise too few studies. More robust methods, such as selection models, are promising alternatives and have been recommended recently.18 Unfortunately, these methods are not as easily conducted and will likely continue to be underrepresented in meta-analytic studies.
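To make the regression-based approach concrete, the following is a minimal sketch of an Egger-style asymmetry test: standardized effects (effect/SE) are regressed on precision (1/SE), and an intercept far from zero suggests small-study effects. The function name and the example data are hypothetical; in practice, established meta-analysis software should be used rather than a hand-rolled version.

```python
import numpy as np

def egger_test(effects, ses):
    """Egger-style regression test for funnel plot asymmetry.

    Regresses standardized effects (effect / SE) on precision (1 / SE)
    and returns the intercept with its t statistic. A large |t| for the
    intercept is taken as evidence of funnel plot asymmetry.
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    y = effects / ses          # standardized effect for each study
    x = 1.0 / ses              # precision for each study
    n = len(y)
    X = np.column_stack([np.ones(n), x])      # design matrix: intercept + slope
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - 2)          # residual variance (2 parameters)
    cov = sigma2 * np.linalg.inv(X.T @ X)     # covariance of the OLS estimates
    t_intercept = beta[0] / np.sqrt(cov[0, 0])
    return beta[0], t_intercept

# Hypothetical meta-analysis: 5 study effect sizes with standard errors.
intercept, t_stat = egger_test(
    [0.10, 0.25, 0.40, 0.15, 0.30],
    [0.05, 0.10, 0.20, 0.08, 0.15],
)
```

The sketch also illustrates the power concern noted above: with only a handful of studies, the intercept's standard error is large, so true asymmetry can easily go undetected.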
Many actions have been proposed that could improve the current state of affairs. We offer a few in hopes of encouraging ongoing discourse regarding this important topic in the anesthesia literature.