Use of External Controls in Regulatory Decision‐Making
In their article, Eichler and colleagues apply the new name "threshold crossing" to an epidemiological design that uses real‐world evidence to construct the threshold (the "counterfactual"). The article provides a comprehensive review of epidemiological techniques for evidence generation as an alternative to the randomized clinical trial (RCT). As the authors point out, randomization remains the key to establishing causal inference because it balances both known and unknown confounders at baseline. Nevertheless, randomization is not perfect, and postrandomization events, such as loss to follow‐up, use of concomitant therapies, and receipt of subsequent care, may interfere with the interpretation of trial results. Moreover, it is not feasible to perform RCTs to answer all of the remaining questions about a drug's use after marketing approval.
The concepts introduced by Eichler and colleagues are not new to medical product regulators. Threshold crossing is simply a new name for the standard practice of using external controls, described in regulation (21 CFR 314.126) and guidance,5 for both single‐arm trials and noninferiority trials. Noninferiority trials are relevant because they are precisely threshold‐crossing trials: they compare the performance of a new treatment to the historical performance of a placebo control, using the performance of a concurrent active comparator as a covariate. Many of the issues the article raises for threshold‐crossing trials have already arisen in noninferiority trials: what sort of historical data are needed to make such a trial feasible, the desirability and difficulty of prespecifying a threshold, assay sensitivity, and the constancy assumption. While assay sensitivity and the constancy assumption cannot be proven in a noninferiority trial, which lacks a placebo control, the design being proposed also lacks the covariate of a concurrent active comparator. Eichler and colleagues focus on the problem of cherry‐picking historical data in order to address these issues, but they ignore the issue of Type I error.
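The parallel between the two designs can be sketched schematically (the notation below is illustrative, not drawn from the article or from regulatory guidance verbatim):

```latex
% Let T, C, and P denote the effects of the test drug, an active
% comparator, and placebo, respectively. Historical data estimate the
% comparator's effect over placebo, C - P, and a noninferiority margin
% M is prespecified to preserve some fraction of that historical effect:
%
%   M < C - P.
%
% In a noninferiority trial, noninferiority is concluded when the
% confidence interval for T - C excludes -M:
\[
  \text{lower confidence bound for } (T - C) > -M .
\]
% A threshold-crossing trial drops the concurrent comparator C entirely:
% the single-arm estimate of T must instead cross a fixed threshold
% derived directly from historical (real-world) data on P.
```

On this reading, the threshold‐crossing design inherits the constancy and assay‐sensitivity assumptions of the noninferiority design while removing the one concurrent anchor, the active comparator, that helps check them.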