A recent study in the New England Journal of Medicine, authored by six researchers at the National Heart, Lung, and Blood Institute (NHLBI), prompts some thoughts about studies with negative outcomes—and their importance in the entire research process.
In this report Dr. David Gordon, Dr. Michael Lauer, and their colleagues analyzed the 244 extramural, randomized clinical trials supported by NHLBI and completed between the years 2000 and 2011. The primary outcome was the time between completion of trials and publication of the main results in a peer-reviewed journal; the secondary outcome was the annual citation rates for these articles—i.e., how many times each article was cited in a given time period. The team also examined a number of trial characteristics that related to these questions, such as budget, number of participants, and whether the result was positive or negative.
Among the many interesting findings is that more than half of the studies analyzed (58 percent) yielded negative results. And intriguingly, of the 31 trials with the highest citation rates, only 8 (26 percent) had positive results. Studies supported by NHLBI, and indeed studies supported by NCCAM, generally start with the enthusiasm of the investigators, peer reviewers, and NIH. They generally begin with the expectation (and indeed preliminary data) that the intervention being studied has the potential to improve patient outcomes. By and large, when no benefit is demonstrated, research teams are understandably disappointed. And Gordon and co-authors found that investigators completing negative studies are indeed significantly slower to publish.
Nevertheless, we do the research because we don’t know the answer! Negative studies are just as important to consumers as positive studies. They are essential blocks in the evidence base. They help everyone—consumers and health care providers—avoid interventions that don’t help.
There is an additional “silver lining.” Beyond answering the primary question, the high-quality data produced during well-performed, carefully monitored studies are of enormous value in framing follow-on questions and in designing subsequent studies.
We learn from surprises—from discovering that we don’t always know what we think we know.