For the sake of science, it’s time to break ranks

Researchers call for a change in evaluation to recognise the importance of reproducibility.

26 July 2018

Anja Krieger


Big data presents big challenges for reproducibility.

Bibliometric indicators that reward scientists for publishing frequently in high-ranked journals — but not for making their methods accessible — are a major cause of the reproducibility crisis, researchers agreed at the latest EuroScience Open Forum (ESOF).

Criticism of ‘excellence funding’ based on numerical rankings echoed across several panels at ESOF, held in July in Toulouse, France, with participants from across the disciplines. “We are killing science because of the numerical ranking,” said radio astronomer Lourdes Verdes-Montenegro, from the Institute of Astrophysics of Andalusia (IAA-CSIC) in Spain. “We are looking for the high rankings instead of the good science, so productivity seems more important than discovery.”

Carole Goble, a computer scientist at the University of Manchester, said making research reproducible is difficult “because you are writing for strangers”. It is a burden, she said, that is not well resourced, recognized, or rewarded. A study and its results can only be reproduced if they come with the full recipe of their production, including a complete description of the methods, along with access to the data and code that were used. For research based on large-scale digital infrastructures, the requirements Goble cited are even more onerous, including access, rich descriptions, transparency, portability, preservation and robustness.

“We need to really value the groundwork of science, and get over that, 'If it’s not in Nature, you don't get promoted,'” said Goble. May Chiao, the chief editor of Nature Astronomy, published by Springer Nature, acknowledged that journals had played a part in “propagating the publish-or-perish culture that may lead to rushed results”. But Chiao argued that journals were part of the solution as well. She listed measures Nature journals have adopted to improve reproducibility, such as checklists for greater transparency and data sharing, and the provision of more space for methods, additional information and replication studies. (Springer Nature also publishes Nature Index.)

René von Schomberg, who leads the European Commission’s Open Science Policy Coordination and Development team, and develops next-generation metrics for open science, took the stance that Open Science was key to solving the current crisis. With early sharing of data, for example, research would become more transparent and verifiable.

But there aren’t many incentives to practice Open Science at the moment, the forum heard. Sebastian Neubert, from Heidelberg University, who researches strongly interacting particles at the Large Hadron Collider, argued that methods and tools that enable reproducibility should be given greater weight in reward systems.

Verdes-Montenegro warned that “Facilities producing big data might make things worse in terms of reproducibility, unless we take actions and really work with rigorous scientific methodology.” She coordinates the Spanish participation in the Square Kilometre Array (SKA), which is being built in Australia and South Africa and will become the world’s largest radio telescope. With thousands of antennas and scientists involved, the daily data flow in the SKA network will surpass an entire year of today’s internet traffic.

But even on a smaller scale, in disciplines dealing with personal information, such as sociology, sharing data so that studies can be replicated can be difficult on privacy grounds, Jeff Dozier from the University of California, Santa Barbara, explained. In a village in the Arctic, for example, even specifying age and gender might identify an individual and make anonymization impossible.
