Excerpt
The data they cite on differences in program performance are illustrative. Although the traditional performance indicators (e.g., contact index) of the South Carolina program are superior to those in our study, it is not clear that this translates into greater effectiveness in reducing disease transmission. A study from our county found that early in our epidemic, patients with syphilis had an average of 6.3 partners during the infectious period.2 Applying our study's contact index (1.75) and brought-to-treatment index (0.47) would result in evaluation of an average of only 0.8 of 6.3 partners. Using South Carolina's indices (2.07 and 0.76) would result in the evaluation of 1.6 of 6.3 partners. With these traditional measures, South Carolina's program performance is better. However, it is clear that both programs fail to identify or intervene with the majority of sexual contacts. Further, no data are presented to show that “stronger” performance results in decreased transmission or disease prevalence in the community.
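The arithmetic behind these figures can be reproduced in a short sketch. It assumes that partners evaluated per patient is the product of the contact index (partners named per patient) and the brought-to-treatment index (fraction of named partners actually evaluated); this product interpretation is an inference, not stated in the original study, but it reproduces the 0.8 and 1.6 figures above. The function name and program labels are illustrative only.

```python
# Illustrative reconstruction of the letter's arithmetic (assumed product model).

AVG_PARTNERS = 6.3  # average partners during the infectious period (ref. 2)

def partners_evaluated(contact_index: float, brought_to_treatment: float) -> float:
    """Expected partners evaluated per index patient, assuming the two
    indices combine multiplicatively (our assumption, not the study's)."""
    return contact_index * brought_to_treatment

our_program = partners_evaluated(1.75, 0.47)     # 1.75 * 0.47 = 0.8225, i.e. ~0.8
south_carolina = partners_evaluated(2.07, 0.76)  # 2.07 * 0.76 = 1.5732, i.e. ~1.6

for name, value in [("Our study", our_program), ("South Carolina", south_carolina)]:
    print(f"{name}: {value:.1f} of {AVG_PARTNERS} partners evaluated")
```

Under this reading, even the stronger South Carolina indices reach only about a quarter (1.6 / 6.3) of infectious-period partners, which is the letter's central point: both programs miss the majority of contacts.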
We agree with Gibson and Lindman that STD programs perform in different epidemiologic, operational, and economic environments. Effectiveness and cost effectiveness can vary as a result of these environmental differences. Our study was based on the realities of our environment; other communities face different realities that can influence program design, effectiveness, and cost effectiveness.
For example, Gibson and Lindman cite a higher charge for serologic testing in South Carolina than the cost in our study. There are real differences in costs among programs. If testing is more expensive in South Carolina, the relative cost effectiveness of testing and partner notification may be different. In any case, this is an issue that should be decided by rigorous analysis, not simply by citing laboratory charges.
Similarly, our study's inclusion of tests on symptomatic volunteers as well as asymptomatic persons arose from the operational reality of our program. Given the magnitude and nature of our syphilis epidemic and the pitfalls inherent in using symptoms as a basis for offering testing, we felt we had no choice but to test both symptomatic and asymptomatic persons in the high-risk populations our program served. Gibson and Lindman state "[t]he appropriate comparison…would be between DIS contact intervention and true screening of asymptomatic persons." If that is the way the South Carolina program operates, their proposed basis for comparison may be appropriate. However, it is not one that reflects the operations or the ethical imperatives of the situation we faced, and it is not the appropriate basis for comparison in our study.
Finally, Gibson and Lindman express concern that testing is less likely than partner notification to identify and treat "core transmitters." They offer the observation that in South Carolina, patients with syphilis identified through partner notification were more likely than volunteers or screening patients to name two or more sexual partners. Although the data suggest that patients found through partner notification have more sexual partners, they provide no evidence that these patients are core transmitters. Naming two or more sexual partners is not an appropriate criterion for defining core transmitters.