Underreporting of trial results is a major source of waste in research and a barrier to evidence synthesis. Our research demonstrated that researchers should go beyond published reports: searching trial registries could identify new trials for half of published systematic reviews, with results posted in registries for 23% of these trials and changes in effect estimates of up to 29%. We also investigated the availability of trial results from other sources such as ClinicalStudyDataRequest.com (CSDR). Nevertheless, despite the US legal requirement (under the FDA Amendments Act, FDAAA) to post trial results, we showed that results for nearly half of cancer drug trials were not publicly available 3 years after trial completion. To reduce this waste, we demonstrated, in a pragmatic cohort-embedded randomized controlled trial (RCT), that a simple low-cost intervention, sending emails reminding trial sponsors of the legal requirement to post results, significantly improved the posting of results at 6 months.
Meta-analyses are widely used to summarize evidence into a single estimate. Our research, based on meta-epidemiologic studies, showed that several trial characteristics (single-center design, small sample size, lack of registration or retrospective registration, surrogate outcomes, and the choice of the measure of treatment effect), as well as the risk-of-bias domains (allocation concealment, blinding, etc.), are associated with overestimated treatment effects. In contrast, we did not find any difference in treatment effect estimates by overall risk of bias, which calls into question the current definition of the overall risk of bias. Our results also questioned the current strategy of including all available evidence in meta-analyses: treatment effect estimates were larger with meta-analysis of all trials than with alternative strategies such as relying on the single most precise trial, meta-analysis of the largest trials, or limit meta-analysis.
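The contrast between pooling strategies can be illustrated with a small sketch. The code below pools hypothetical (invented) log odds ratios with a standard inverse-variance random-effects model using the DerSimonian-Laird tau-squared estimator, then compares the all-trials estimate with the estimate from the single most precise trial; it is an illustration of the general phenomenon, not a reproduction of our analyses.

```python
import math

def pool_fixed(effects, variances):
    # Inverse-variance (fixed-effect) pooled estimate and its variance.
    w = [1 / v for v in variances]
    est = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    return est, 1 / sum(w)

def pool_dersimonian_laird(effects, variances):
    # Random-effects pooling with the DerSimonian-Laird tau^2 estimator.
    w = [1 / v for v in variances]
    fixed, _ = pool_fixed(effects, variances)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    w_star = [1 / (v + tau2) for v in variances]
    est = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    return est, 1 / sum(w_star)

# Hypothetical log odds ratios (lower = larger benefit) and variances
# for five trials; the small trials (large variance) show larger effects,
# a pattern consistent with small-study effects.
effects = [-0.90, -0.80, -0.40, -0.15, -0.10]
variances = [0.25, 0.20, 0.10, 0.02, 0.03]

all_trials, _ = pool_dersimonian_laird(effects, variances)
most_precise = effects[variances.index(min(variances))]
print(f"All trials (random effects): {all_trials:.3f}")
print(f"Single most precise trial:   {most_precise:.3f}")
```

With these invented inputs, the all-trials random-effects estimate is more extreme than the single most precise trial, because random-effects weighting gives the small, more favorable trials relatively more influence.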
Network meta-analyses are increasingly used to identify the best available intervention among all existing treatments for a given condition. We showed that the conduct, reporting, and interpretation of network meta-analyses were often questionable: in most publications, the search strategy and the assessment of risk of bias and publication bias were inadequate. Furthermore, we showed that treatment rankings, usually reported without credible intervals, carry a substantial degree of imprecision: in 28% of networks, there was a 50% or greater probability that the best-ranked treatment was actually not the best. We also investigated whether conventional meta-analyses meet clinicians’ and patients’ needs. Our results revealed substantial research waste, with more than 40% of treatments, treatment comparisons, and trials not covered by existing systematic reviews. We therefore developed a new paradigm, the “live cumulative network meta-analysis”: a single network meta-analysis covering all treatments, systematically updated as soon as the results of a new trial become available.
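The ranking-imprecision point can be made concrete with a simulation sketch. Assuming (hypothetically) three treatments whose effects versus a common comparator have approximately normal posteriors, the code below draws from those posteriors and estimates each treatment's probability of being the best; the means and standard deviations are invented for illustration.

```python
import random

random.seed(42)

# Hypothetical posterior means and SDs of each treatment's effect
# versus a common comparator (log odds ratios; lower is better).
treatments = {"A": (-0.30, 0.15), "B": (-0.25, 0.08), "C": (-0.10, 0.20)}

n_draws = 20_000
best_counts = {t: 0 for t in treatments}
for _ in range(n_draws):
    # One joint draw: sample each treatment's effect, find the minimum.
    draws = {t: random.gauss(mu, sd) for t, (mu, sd) in treatments.items()}
    best = min(draws, key=draws.get)  # lower effect = better
    best_counts[best] += 1

p_best = {t: n / n_draws for t, n in best_counts.items()}
for t, p in sorted(p_best.items(), key=lambda kv: -kv[1]):
    print(f"P({t} is best) = {p:.2f}")
```

Even though treatment A has the most favorable posterior mean and would be ranked first, its probability of truly being the best is well below 1 here, which is exactly the kind of imprecision that is hidden when rankings are reported without uncertainty.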