Frequent collaborators are less likely to produce replicable results
Stick with those you don't know.
20 September 2019
Scientists who collaborate with the same groups of researchers across multiple studies are less likely to produce robust and replicable results than those who opt to broaden their horizons and team up with someone new.
Both researchers and funders know the value of seeking expertise beyond national and institutional borders. Today, more countries are taking part in cross-border partnerships than ever before. Between 2000 and 2015, the number of papers with international co-authors more than tripled, from 136,483 to 418,866.
But finding a team on the other side of the world and sticking with them isn’t always the best idea. Seeking out fresh eyes and a diverse set of expertise can help ensure the quality of the research, a new study has found.
James Evans, a sociologist at the Santa Fe Institute in the United States and senior author of the study published in eLife, says that continuing to collaborate with the same network of colleagues can “mask and propagate fragile findings.”
The study calls for policies that promote non-repeated collaborations and diverse approaches to answering research questions.
Robots testing replication
Evans and his colleagues examined the results of 3,363 published studies on 51,292 drug-gene interactions listed in the Comparative Toxicogenomics Database, and recorded those with supporting and opposing findings. Drug-gene interactions occur when certain gene variations put a patient at risk of not responding, or over-responding, to medications, including adverse reactions and drug toxicity.
The researchers compared these findings with results listed in the LINCS L1000 data repository, which contains close to two million gene expression profiles for thousands of drugs. The LINCS L1000 experiments were performed by artificial intelligence programs.
When they looked at thousands of supporting findings, those produced by authors who had not previously worked together were 55% more likely to be replicated in the L1000 experiments than those produced by frequent collaborators.
Frequent collaborators were also more likely to confirm each other's results and use similar methods, which can mask fragile findings and compound the problem.
When the team compared widely supported claims with results from single studies, they found that almost half of the drug-gene interactions reported in multiple studies matched those produced by the L1000 program. In contrast, fewer than a quarter of single-study results could be verified.
To improve the reproducibility of scientific research, says Evans, funders need to establish policies that favour independent replication studies.
“Large institutions, such as the US National Institutes of Health, are often quick to establish a single protocol,” he says. “Our work advises them to broaden that.”