Use of External Controls in Regulatory Decision‐Making

Evidence generation serves as the underpinning for all healthcare‐related decision‐making. There has been a recent trend to romanticize real‐world evidence and dichotomize it from the clinical trial enterprise. In contrast, the US Food and Drug Administration (FDA) views evidence generation as a multidimensional process based on data source, study design, degree of pragmatism, and context. The data source may be an existing source of “real‐world” data, such as electronic health records or administrative claims data, or newly collected data (e.g., through the use of case report forms in clinical trials). Trial designs may be interventional or noninterventional (observational), internally or externally controlled, and randomized or nonrandomized. The degree of pragmatism captures how closely the design and conduct of the trial mirror the clinical practice to which the results will be applied.1 The context includes factors such as the seriousness and prevalence of the disease being studied, the availability of alternative therapies, and whether the study objective is to inform regulatory action, evaluate comparative effectiveness, or generate hypotheses. While it is preferable to have high‐quality evidence for all healthcare decision‐making, this is often not practical or feasible. Oftentimes we settle for lower‐quality evidence rather than make decisions based on no evidence whatsoever. For instance, deciding on which formulary tier to place a drug, or generating hypotheses in academic research, may not require substantial evidence. Even within the regulatory context, we distinguish between efficacy and safety evaluation. With few exceptions, the safety of a medical product is not studied with the same rigor as its efficacy (e.g., lack of prespecified hypothesis testing or of adequate power to detect small signals). Most important, these dimensions are not mutually exclusive. For instance, randomized trials can be performed within the healthcare system using real‐world data.2
In the article by Eichler and colleagues, “Threshold Crossing” is a new name for an epidemiological design that uses real‐world evidence to develop the threshold (the “counterfactual”). The article provides a comprehensive review of epidemiological techniques for evidence generation as an alternative to the randomized clinical trial (RCT). As the authors point out, randomization remains the key to establishing causal inference because it balances both known and unknown confounders at baseline. Nevertheless, randomization is not perfect, and postrandomization events, such as loss to follow‐up, use of concomitant therapies, and receipt of subsequent care, may interfere with the interpretation of trial results. Moreover, it is not feasible to perform RCTs to answer all of the questions that remain about a drug's use after marketing approval.
The concepts introduced by Eichler and colleagues are not new to medical product regulators. Threshold crossing is simply a new name for the standard practice of using external controls, described in regulation (21 CFR 314.126) and guidance,5 for both single‐arm trials and noninferiority trials. Noninferiority trials are relevant because they are precisely threshold‐crossing trials: they compare the performance of a new treatment with the historical performance of a placebo control, using the performance of a concurrent active comparator as a covariate (a standard formulation is sketched below). Many of the issues discussed in the article for threshold‐crossing trials have already arisen in noninferiority trials: what sort of historical data are needed to make such a trial feasible, the desirability and difficulty of prespecifying a threshold, assay sensitivity, and the constancy assumption. Although assay sensitivity and the constancy assumption cannot be proven in a noninferiority trial, which lacks a placebo control, the design proposed here lacks even the concurrent active comparator that would serve as a covariate. Eichler and colleagues focus on the problem of cherry‐picking historical data in order to address these issues, but they ignore the issue of Type I error, illustrated by the simulation below.
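To make the noninferiority analogy concrete, the hypothesis can be written in a standard form (the notation here is ours, for illustration; it does not appear in the article):

    H_0: \theta_T - \theta_C \le -M \quad \text{versus} \quad H_1: \theta_T - \theta_C > -M,

where \theta_T and \theta_C denote the effects of the new treatment and the active comparator, and the margin M = f \cdot \hat{\theta}_{C-P}, with 0 < f < 1, is chosen to preserve a fraction of the historical comparator‐versus‐placebo effect \hat{\theta}_{C-P}. The constancy assumption holds that the comparator's effect over placebo in the current trial equals this historical estimate; without a concurrent placebo arm, that assumption cannot be verified, only argued.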
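The Type I error concern can be illustrated with a deliberately simplified simulation (a hypothetical sketch with invented parameters, not an analysis from the article): if the threshold is set by selecting the most favorable of several historical control estimates, the false‐positive rate of a single‐arm comparison rises well above its nominal level.

    import numpy as np

    rng = np.random.default_rng(0)
    n_sims = 10_000   # simulated single-arm "trials"
    n_hist = 5        # candidate historical control series to choose from
    n = 200           # patients per data source
    p_true = 0.30     # true response rate, identical for drug and controls (the null)

    rejections = 0
    for _ in range(n_sims):
        # Cherry-pick: take the lowest historical control rate as the threshold,
        # which makes the new drug easiest to "beat".
        hist_rates = rng.binomial(n, p_true, size=n_hist) / n
        threshold = hist_rates.min()
        # Single-arm trial under the null: the drug is no better than control.
        trial_rate = rng.binomial(n, p_true) / n
        # Naive one-sided z-test against the threshold, treating it as a fixed constant.
        se = np.sqrt(trial_rate * (1 - trial_rate) / n)
        if se > 0 and (trial_rate - threshold) / se > 1.645:  # nominal alpha = 0.05
            rejections += 1

    print(f"empirical Type I error: {rejections / n_sims:.3f} (nominal 0.05)")

Even with a single historical series, the naive test exceeds its nominal level because the sampling error of the threshold is ignored; choosing the minimum of five candidates inflates the false‐positive rate much further.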