MESSAGE FROM THE EDITOR
Controlling confounding factors is essential to ensuring the internal validity of experimental studies. Various design and statistical approaches may be used to control confounding, such as applying stringent inclusion and exclusion criteria, using a placebo intervention, conducting repeated monitoring/measurements over a specified period, controlling for contamination, and excluding non-adherent or withdrawn cases from the analysis of treatment effect. However, using control measures that do not exist in normal care settings may reduce the relevance of study findings to daily practice.
Gartlehner, Hansen, Nissman, Lohr, and Carey (2006) proposed a simple diagnostic tool with 7 study design criteria to distinguish effectiveness (pragmatic) from efficacy (explanatory) trials. These criteria are (1) population in primary care, (2) less stringent eligibility criteria, (3) health outcome, (4) long study duration/clinically relevant treatment modalities, (5) assessment of adverse events, (6) adequate sample size to assess a minimally important difference from a patient perspective, and (7) intention-to-treat analysis. These authors later proposed a cutoff of 6 of the 7 criteria for classifying a trial as pragmatic, based on an analysis of sensitivity and specificity using 24 studies selected by experts.
However, very few studies are purely pragmatic or explanatory. The relationship between pragmatic and explanatory designs is a continuum rather than an all-or-none dichotomy. Thorpe and colleagues (2009) developed the Pragmatic-Explanatory Continuum Indicator Summary (PRECIS) to help trial designers assess the degree to which design decisions align with the stated purposes of a trial. The PRECIS includes 10 domains that distinguish pragmatic from explanatory trials: (1) the eligibility criteria for trial participants; (2) the flexibility with which the experimental intervention is applied; (3) the degree of practitioner expertise in applying and monitoring the experimental intervention; (4) the flexibility with which the comparison intervention is applied; (5) the degree of practitioner expertise in applying and monitoring the comparison intervention; (6) the intensity of follow-up of trial participants; (7) the nature of the trial’s primary outcome; (8) the intensity of measuring participants’ compliance with the prescribed intervention, and whether compliance-improving strategies are used; (9) the intensity of measuring practitioners’ adherence to the study protocol, and whether adherence-improving strategies are used; and (10) the specification and scope of the analysis of the primary outcome.
In this issue, Palese, Bevilacqua, and Dante apply PRECIS to evaluate the nature (pragmatic vs. explanatory) of 68 nursing RCTs. This study is the first to use PRECIS to evaluate RCTs conducted in a specific discipline. Each PRECIS domain is rated on a scale from 1 (explanatory) to 5 (pragmatic), so total scores range from 10 (most explanatory) to 50 (most pragmatic). The mean total score in their evaluation was 31.1, with Domain 2 the most explanatory (i.e., having the lowest score, reflecting limited flexibility in applying the experimental intervention) and Domain 5 the most pragmatic (i.e., having the highest score, reflecting flexibility in the practitioner expertise with which the comparison intervention is applied and monitored).
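The scoring scheme described above can be illustrated with a minimal sketch (not part of the published tool; the function name and the example domain ratings are hypothetical): ten domain ratings, each on the 1-to-5 scale, are summed to a total between 10 and 50.

```python
def precis_total(ratings):
    """Sum 10 PRECIS domain ratings, each rated 1 (explanatory) to 5 (pragmatic).

    Totals therefore range from 10 (most explanatory) to 50 (most pragmatic).
    """
    if len(ratings) != 10:
        raise ValueError("PRECIS has exactly 10 domains")
    if any(not 1 <= r <= 5 for r in ratings):
        raise ValueError("each domain is rated from 1 to 5")
    return sum(ratings)

# Hypothetical ratings for a single trial, ordered by domain 1-10:
example_ratings = [4, 2, 3, 4, 5, 3, 4, 2, 3, 4]
print(precis_total(example_ratings))  # -> 34, toward the pragmatic end
```

A trial rated 1 on every domain would score 10 (fully explanatory), and one rated 5 on every domain would score 50 (fully pragmatic), matching the range noted above.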
Although I believe additional studies are necessary to support the authors' conclusion, based on mean scores, that nursing RCTs are pragmatic, the findings of this study provide interesting new insights into the design of nursing RCTs along the explanatory–pragmatic continuum. Future studies should examine the relationship between PRECIS results and the application of RCT findings in practice.