Rush to publish weakens scientific integrity, study finds
A new model suggests that bias towards novel results is undermining the reproducibility of published research.
18 October 2017
A bias by journals towards studies with positive findings is undermining the trustworthiness of published science, a new mathematical model suggests.
The findings published on the bioRxiv preprint server are a first step in measuring how factors such as competition for funding and bias towards novel findings influence the quality of science published in top-tier journals.
Publishing research in high-impact journals is integral to climbing the career ladder, and the pressure is intensifying as researchers compete for dwindling funding.
But the rush to publish may be eroding the quality of research, the study’s authors found. They point to a growing concern that scientific research is in a reproducibility crisis, meaning that many published studies cannot be replicated.
“While we know dubious research practices are common, we have no idea of how to quantify just how much,” David Grimes of Queen’s University Belfast says. “This simple model could help demonstrate why our obsession with publication may have a dark side.”
To find out which factors shape the trustworthiness of published research, Grimes and colleagues constructed a model of how researchers fare when funding is awarded based on the number of papers published. The theoretical model categorised researchers as diligent, careless or unethical to see how different behaviours gain an advantage under various conditions.
The authors then examined the effects of biases from several sources. They found that journals were more likely to select dubious research when funding was scarce: in this scenario, increased competition led to more irreproducible than diligent research being published.
Their model also found that when false positives were common in a field, unethical and careless researchers were allocated more funding than their diligent counterparts.
When the authors tweaked the journal bias towards publishing only positive results, the model showed that fraudulent and falsely positive research was likely to be rewarded at the expense of reproducible science.
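The preprint's actual model and parameters are not given in this article, but the mechanism it describes can be illustrated with a toy simulation. In the sketch below, every name and number (the three strategy labels, the false-positive rates, the journal acceptance probabilities) is an assumption made for illustration, not the authors' model: researchers submit papers each round, positive results are far more likely to be accepted, and accepted-paper counts stand in for funding.

```python
import random

# A toy illustration only: the preprint's model, categories and parameter
# values are not given in the article, so every number below is assumed.
TRUE_EFFECT_RATE = 0.1       # fraction of studied hypotheses that are true
JOURNAL_POSITIVE_BIAS = 0.9  # acceptance probability for a positive result
NEGATIVE_ACCEPT_RATE = 0.1   # acceptance probability for a negative result
PAPERS_PER_ROUND = 2         # submissions per researcher per round

# Assumed false-positive rate per strategy: careless and unethical
# practices inflate the chance of reporting a positive that is not real.
FALSE_POSITIVE_RATE = {
    "diligent": 0.05,
    "careless": 0.30,
    "unethical": 0.60,
}

def accepted_papers(strategy, rng):
    """Papers a researcher of the given strategy gets accepted in one round."""
    accepted = 0
    for _ in range(PAPERS_PER_ROUND):
        true_effect = rng.random() < TRUE_EFFECT_RATE
        positive = true_effect or rng.random() < FALSE_POSITIVE_RATE[strategy]
        accept_prob = JOURNAL_POSITIVE_BIAS if positive else NEGATIVE_ACCEPT_RATE
        if rng.random() < accept_prob:
            accepted += 1
    return accepted

def simulate(rounds=1000, seed=0):
    """Funding proxy: total accepted papers per strategy over many rounds."""
    rng = random.Random(seed)
    return {s: sum(accepted_papers(s, rng) for _ in range(rounds))
            for s in FALSE_POSITIVE_RATE}

totals = simulate()
```

Under these assumptions, the strategies with higher false-positive rates accumulate more accepted papers, and hence more of the funding proxy; shrinking the gap between `JOURNAL_POSITIVE_BIAS` and `NEGATIVE_ACCEPT_RATE` — i.e. reducing journal bias — narrows their advantage.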
To boost the scientific rigour of published research, the authors propose strengthening fraud detection, reducing journal bias and rewarding diligence over positive results.
Stephen Woodcock of the University of Technology, Sydney (UTS), says that changing incentives and moving towards unbiased publishing may still have its drawbacks.
Woodcock argues that a more egalitarian model may create another perverse incentive for unscrupulous researchers to reproduce the same studies, knowing that a lack of novelty is no barrier to gaining rewards.
“Some people are just really good at unscrupulously chasing the incentives wherever they are,” Woodcock told the Nature Index. “If the goalposts move, their behaviour will move accordingly.”