Recent studies have evaluated cumulative human immunodeficiency virus type 1 (HIV-1) viral load (cVL) for predicting disease outcomes, with discrepant results. We reviewed the disparate methodological approaches taken and evaluated the prognostic utility of cVL in a resource-limited setting. Using data from the Infectious Diseases Institute (Makerere University, Kampala, Uganda) cohort, whose members initiated antiretroviral therapy in 2004–2005 and were followed for 9 years, we calculated patients' time-updated cVL by summing the area under their viral load curves on either a linear scale (cVL1) or a logarithmic scale (cVL2). Using Cox proportional hazards models, we evaluated both metrics as predictors of incident opportunistic infections and mortality. Among 489 patients analyzed, neither cVL measure was a statistically significant predictor of opportunistic infection risk. In contrast, cVL2 (but not cVL1) was a statistically significant predictor of mortality, with each log10 increase corresponding to a 1.63-fold (95% confidence interval: 1.02, 2.60) elevation in mortality risk when cVL2 was accumulated from baseline. However, whether cVL is predictive hinges on difficult choices surrounding the cVL metric and the statistical model employed. Previous studies may have suffered from confounding bias due to their focus on cVL1, which correlates strongly with other variables. Further methodological development is needed to determine whether the inconsistent predictive utility of cVL arises from causal relationships or from statistical artifacts.
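The linear-scale (cVL1) and logarithmic-scale (cVL2) metrics described above can be sketched as trapezoidal areas under a patient's viral load curve. The following is a minimal illustration, not the authors' analysis code; the function name, the choice of the trapezoid rule, and the units (viral load in copies/mL, time in years since therapy initiation) are assumptions for the example.

```python
import math

def cumulative_vl(times, vls, log_scale=False):
    """Time-updated cumulative viral load as area under the viral load curve.

    times: visit times in years since antiretroviral therapy initiation, ascending
    vls:   viral load measurements (copies/mL) at those visits
    log_scale=False -> cVL1-style metric (linear scale, copy-years/mL)
    log_scale=True  -> cVL2-style metric (log10 scale, log10 copy-years/mL)
    """
    # Transform to log10 scale for the cVL2-style metric.
    vals = [math.log10(v) for v in vls] if log_scale else list(vls)
    # Trapezoid rule: sum the area of each interval between consecutive visits.
    auc = 0.0
    for i in range(1, len(times)):
        auc += 0.5 * (vals[i - 1] + vals[i]) * (times[i] - times[i - 1])
    return auc

# Illustrative patient: suppressed after the first year of therapy.
t = [0.0, 0.5, 1.0, 2.0]
vl = [100000.0, 1000.0, 50.0, 50.0]
cvl1 = cumulative_vl(t, vl)                  # dominated by the early high values
cvl2 = cumulative_vl(t, vl, log_scale=True)  # compresses early peaks
```

The example also hints at why the two metrics can behave differently in regression models: on the linear scale, a brief pre-suppression peak dominates the accumulated total, whereas the log scale weights sustained low-level viremia more evenly.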