Author Response, “The Water-drinking Test and Glaucoma Progression: Considerations Regarding the Test Usefulness as an Independent Risk Assessment Tool”



In Reply:
We appreciate the comments by Prata and Dias on our paper.1 First and foremost, we believe that their letter is an opportunity to educate and discuss general principles regarding study design. In addition, our responses highlight the importance of the water-drinking test (WDT) as a tool for risk assessment in a real-world clinical setting.
With regard to the lack of statistical significance for other well-known risk factors for glaucoma progression [eg, central corneal thickness (CCT), age, baseline mean deviation (MD)], we would like to remind the authors that study design, the variables included and how they are measured, endpoint determination, and, importantly, sample size all play a crucial role in determining which variables emerge as significant. In survival analyses such as the Cox proportional hazards model, which we used in our study, the parameters needed for sample size calculation are the effect size [log of the hazard ratio (HR) per unit of change], the variance (or SD) of the independent variable, the number of events over time, power (1−β), and the α level. For example, CCT is a well-known risk factor that would require a large sample size to yield significant results (assuming 80% power at α=5%). With a SD of 37.2 µm and an event rate of 32.6% (from our sample), the minimum sample size required to detect a HR of 1.006/µm thinner (or 1.25/40 µm)3 would be at least 506 eyes in a univariable model alone (PASS software, version 14, NCSS). Of note, the number of events depends not only on the progression criteria used but also on the length of follow-up; had we used more conservative criteria of progression, this number would have been even larger. For other risk factors (baseline MD and age), the minimum sample size would also increase substantially. In contrast, intraocular pressure (IOP) has been shown to be a risk factor with a strong effect size in a number of studies.3,4 On the basis of a HR of 1.13 per mmHg and a SD between 3.5 and 4.0 mmHg,5,6 the minimum sample size would be between 101 and 132 eyes. Our study included 144 eyes. To highlight the importance of how variables are defined: had we dichotomized baseline MD at a cut-off of −12 dB, the HR in the univariable analysis would be 1.90 (95% confidence interval, 1.02-3.53; P=0.04), suggesting that eyes with baseline MD worse than −12 dB had a 90% increased risk of progression compared with those with a better index.
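The sample size reasoning above follows the standard approximation for Cox models described by Hsieh and Lavori. As a minimal sketch (not the authors' PASS computation; the function name and rounding are our own illustration), the calculation can be reproduced with the values quoted in the text:

```python
import math
from statistics import NormalDist

def cox_min_sample_size(hr_per_unit, sd, event_rate, power=0.80, alpha=0.05):
    """Approximate minimum sample size for a univariable Cox model
    (Hsieh & Lavori): n = (z_{1-alpha/2} + z_{1-beta})^2
                          / (sd^2 * ln(HR)^2 * event_rate)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 5%
    z_beta = NormalDist().inv_cdf(power)            # 0.84 for 80% power
    log_hr = math.log(hr_per_unit)
    n = (z_alpha + z_beta) ** 2 / (sd ** 2 * log_hr ** 2 * event_rate)
    return math.ceil(n)

# CCT: HR 1.006 per um thinner, SD 37.2 um, 32.6% event rate
print(cox_min_sample_size(1.006, 37.2, 0.326))   # ~487 eyes

# IOP: HR 1.13 per mmHg, SD between 3.5 and 4.0 mmHg
print(cox_min_sample_size(1.13, 4.0, 0.326))     # 101 eyes
print(cox_min_sample_size(1.13, 3.5, 0.326))     # 132 eyes
```

The IOP figures match the 101 to 132 eyes quoted above, and the CCT figure lands in the same range as the "at least 506 eyes" reported via PASS, which applies additional adjustments beyond this bare formula.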
Of note, the WDT peaks remained significant even after adjusting for this new variable and including all other covariates (CCT, age, and medications) in the final model (HR, 1.13; 95% CI, 1.01-1.26; P=0.02). Regarding the nonsignificant effect of mean follow-up IOP in our study, we extensively discussed the specific characteristics of our inclusion/exclusion criteria and how other prospective studies also failed to find such an association, even with larger sample sizes (second to last paragraph of the Discussion section). The fact that we did not find a statistically significant effect does not suggest that these factors are unimportant in clinical practice; in fact, they play a key role in risk assessment. We believe this point has been extensively addressed in the major randomized clinical trials. Finally, the authors stated that no other prospective longitudinal study found only 1 significant predictor for glaucoma progression. This statement is not only irrelevant but also inaccurate. For instance, Yu et al7 found that only retinal nerve fiber layer thinning measured with optical coherence tomography was significantly associated with visual field progression (notably, there was no significant effect of CCT, IOP, or baseline MD).
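As an aside, reported HRs and confidence intervals of the kind quoted above can be sanity-checked against their P-values with a Wald approximation, assuming a symmetric interval on the log-HR scale. A small illustrative sketch (the function name is ours):

```python
from math import log
from statistics import NormalDist

def wald_p_from_hr_ci(hr, lo, hi, level=0.95):
    """Recover the approximate two-sided Wald P-value from a reported
    hazard ratio and its confidence interval, assuming the CI is
    symmetric on the log scale."""
    z_crit = NormalDist().inv_cdf(1 - (1 - level) / 2)   # 1.96 for a 95% CI
    se = (log(hi) - log(lo)) / (2 * z_crit)              # SE of log(HR)
    z = log(hr) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Baseline MD dichotomized at -12 dB: HR 1.90 (95% CI, 1.02-3.53)
print(round(wald_p_from_hr_ci(1.90, 1.02, 3.53), 2))   # 0.04
```

This recovers the reported P=0.04 for the dichotomized baseline MD; small discrepancies with published P-values can arise from rounding of the CI bounds.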