The Next Generation of Drug Safety Science: Coupling Detection, Corroboration, and Validation to Discover Novel Drug Effects and Drug–Drug Interactions
Detection of subpopulation-specific adverse reactions is statistically challenging. Clinical trials lack the power to study DDIs or to evaluate specific subgroups. Retrospective studies of real-world clinical data, on the other hand, have the diversity and the size to detect these patterns, but they also carry unavoidable biases and systematic errors1 that produce dubious discoveries (Box 1). A new approach to drug safety research is needed, one that takes the best aspects of both methods, combining the validity of a prospective trial with the opportunity for discovery provided by observational data.
I propose the use of a three-step methodology (Figure 1) to discover and validate novel adverse drug reactions and DDIs. The first step, detection, is to data-mine a large observational resource, such as the US Food and Drug Administration's (FDA's) Adverse Event Reporting System (FAERS), for unexpected associations between drugs and adverse events. This procedure will produce thousands of significant hypotheses, and, as in any data-mining experiment, a large proportion of these are expected to be false discoveries. There are many signal detection and statistical data-mining algorithms to choose from when generating new drug–adverse event hypotheses.2 The precise choice of method will depend on the adverse event of interest and on the balance desired between false positives and novel discoveries.
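One widely used family of signal detection methods is disproportionality analysis. As a minimal sketch of the detection step, the proportional reporting ratio (PRR) below scores a single drug–event pair from a 2×2 contingency table of spontaneous reports; the counts in the usage note are illustrative, not real FAERS data, and in practice one of the more sophisticated algorithms cited above may be preferred:

```python
import math

def prr(a, b, c, d):
    """Proportional reporting ratio for one drug-event pair.

    a: reports mentioning both the drug and the event
    b: reports mentioning the drug but not the event
    c: reports mentioning the event but not the drug
    d: reports mentioning neither the drug nor the event
    """
    rate_drug = a / (a + b)    # event rate among the drug's reports
    rate_other = c / (c + d)   # event rate among all other reports
    return rate_drug / rate_other

def prr_ci(a, b, c, d, z=1.96):
    """Approximate 95% confidence interval for the PRR (Wald, on the log scale)."""
    est = prr(a, b, c, d)
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lower = math.exp(math.log(est) - z * se)
    upper = math.exp(math.log(est) + z * se)
    return lower, upper
```

For example, `prr(20, 180, 100, 9700)` compares a 10% event rate among the drug's reports with a roughly 1% background rate, giving a PRR near 9.8; a common screening rule flags pairs whose lower confidence bound exceeds a threshold such as 2. Applied across all drug–event pairs in a resource like FAERS, a statistic of this kind is what yields the thousands of candidate hypotheses described above.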
The second step, corroboration, is to integrate an independent resource (e.g., chemical network biology data or electronic health records, EHRs) to prioritize the data-mined hypotheses by their plausibility. A drug that can be molecularly connected to the adverse event is more likely to cause it than one that cannot. Chemical informatics methods and systems pharmacology models that use drug-binding and pathway data can be used to find these molecular connections.3 Likewise, a drug effect that can be independently replicated in EHRs is more likely to be real than one that cannot. Because this step is also performed computationally on already-collected data, it can be applied to all of the mined hypotheses from the first step, reducing the number of significant hypotheses from thousands to hundreds or tens. The limitation is that these models are specific to the adverse reaction being studied; for example, a molecular model or EHR phenotype built for studying arrhythmias will likely not apply to glucose metabolism.
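The corroboration step can be sketched as a simple filter-and-rank pass over the mined hypotheses. The field names, thresholds, and evidence flags below are illustrative assumptions rather than a prescribed scheme; in practice the molecular-plausibility flag would come from a chemical informatics or systems pharmacology model and the odds ratio from an EHR replication analysis:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    drug: str
    event: str
    faers_prr: float        # signal strength from the detection step
    target_linked: bool     # hypothetical flag: molecular path from drug to event?
    ehr_odds_ratio: float   # hypothetical independent estimate from EHR data

def corroborate(hypotheses, min_prr=2.0, min_ehr_or=1.0):
    """Keep hypotheses with a strong mined signal plus at least one independent
    line of support, then rank by the product of the two effect estimates."""
    kept = [h for h in hypotheses
            if h.faers_prr >= min_prr
            and (h.target_linked or h.ehr_odds_ratio > min_ehr_or)]
    return sorted(kept, key=lambda h: h.faers_prr * h.ehr_odds_ratio, reverse=True)
```

Requiring independent corroboration is what shrinks the candidate list from thousands to the hundreds or tens that merit experimental follow-up; the thresholds control the same trade-off between false positives and novel discoveries noted for the detection step.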
In the final step, validation, a model system is identified in which to experimentally test the strongest hypotheses. This is the most challenging step, as many adverse reactions do not map cleanly onto experimental systems. It is also the most important, since a hypothesis is only as good as it is falsifiable. Ideally, the experiment will be efficient and straightforward to interpret, like a protein affinity assay. Often, however, a more complex experiment in a cellular or animal model is needed. In either case, compared with launching a prospective clinical trial, this is both a more efficient and a more ethical way to validate adverse drug reactions.