Difficulty in detecting discrepancies in a clinical trial report: 260-reader evaluation

Abstract

Background: Scientific literature can contain errors. Discrepancies, defined as two or more statements or results that cannot all be true, may signal problems with a trial report. In this study, we report how many discrepancies were detected by a large panel of readers examining a single trial report containing many discrepancies.

Methods: We approached a convenience sample of 343 journal readers in seven countries and invited them in person to participate in the study. They were asked to examine the tables and figures of one published article for discrepancies. In total, 260 participants agreed, ranging from medical students to professors. The discrepancies they identified were tabulated and counted; 39 distinct discrepancies were identified. We evaluated the probability of discrepancy identification, and whether more time spent on the task or greater participant experience as an academic author improved the ability to detect discrepancies.

Results: Overall, 95.3% of discrepancies were missed. Most participants (62%) were unable to find any discrepancies, and only 11.5% noticed more than 10% of them. More discrepancies were noted by participants who spent more time on the task (Spearman's ρ = 0.22, P < 0.01) and by those with greater experience publishing papers (Spearman's ρ = 0.13 with number of publications, P = 0.04).

Conclusions: Noticing discrepancies is difficult. Most readers miss most discrepancies even when asked specifically to look for them. The probability of a discrepancy evading an individual sensitized reader is 95%, making it important that, when problems are identified after publication, readers are able to communicate with each other. When made aware of discrepancies, the majority of readers support editorial action to correct the scientific record.
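The conclusion's argument for reader communication can be sketched numerically. Assuming, purely for illustration, that readers miss a given discrepancy independently at the reported per-reader rate of 95.3% (an assumption of ours, not a model from the study), the chance that at least one of n readers notices it is 1 − 0.953^n:

```python
# Illustrative sketch, not part of the study: assumes each reader
# independently misses a given discrepancy with probability 0.953
# (the overall miss rate reported in the Results).
MISS_PROB = 0.953

def detection_prob(n_readers: int, miss: float = MISS_PROB) -> float:
    """Probability that at least one of n independent readers
    notices a given discrepancy."""
    return 1.0 - miss ** n_readers

for n in (1, 10, 50, 100):
    print(f"{n:3d} readers -> P(detected) = {detection_prob(n):.3f}")
```

Under this independence assumption, a single reader detects a given discrepancy only about 5% of the time, whereas a pool of 100 readers would detect it with near certainty, which is why channels for readers to share post-publication concerns matter.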
