Personal biases speed up research publication

Megajournal editors under the microscope.

8 October 2019

Gemma Conroy


Editors working at ‘megajournals’ are quicker to publish articles by familiar authors who cite the editor’s work, according to an analysis of nearly 142,000 PLOS ONE papers.

Alexander Petersen, lead author and computational social scientist at the University of California, Merced, says that the findings suggest that editors at megajournals – large open-access publications that publish higher volumes of research than traditional journals – may be more susceptible to bias and conflicts of interest than editors at smaller journals.

“Apathy and misconduct frequently emerge in power-driven environments,” says Petersen. “A much larger danger is in how the public may perceive such behavior, especially at a time in which we need the public’s trust in scientific guidance.”

Since emerging a decade ago, megajournals have accounted for almost 2% of published scientific research. In 2012 alone, the world’s largest megajournal, PLOS ONE, published more than 23,000 articles, which was 1.4% of all articles indexed by the Web of Science. The fast-growing journal relies on an editorial board of almost 7,000 academics.

Petersen says that unlike traditional large journals, such as Proceedings of the National Academy of Sciences (PNAS), the PLOS ONE editorial board lacks structured oversight and that editors are “given fairly free rein to personally scale up their own activity”.

He examined the activities of 6,934 PLOS ONE editors using data on the handling editor, manuscript, authors and citations of 141,986 papers published between 2006 and 2015. He compared PLOS ONE editors with those working at PNAS and Management Science. The findings were published in the Journal of Informetrics.

When Petersen narrowed down the dataset to 100 of the most prolific PLOS ONE editors, he found that this small group oversaw 12.2% of all articles analysed over the 10-year period.

Among these highly productive editors, 85 handled more papers each year than the most prolific editor at PNAS, who worked on an average of 22 articles a year.

Although PLOS ONE takes an average of 18 weeks to accept an article for publication, some editors accept articles much more quickly, particularly when handling the manuscripts of previous collaborators and repeat authors.

Petersen found that most PLOS ONE editors take, on average, 19 fewer days to accept an article written by a former co-author. When handling papers by repeat authors, the most active editors take just six weeks on average to accept the article for publication.

Authors who cited the work of their editors at least once in their manuscripts also enjoyed a speedier acceptance process. This was particularly apparent among the most active editors, who took 18 fewer days to accept articles with at least one citation to their own work compared to other articles they worked on.

Petersen also found high self-citation rates for three unusually prolific editors. Articles accepted by these editors were cited 22% less often than articles handled by other PLOS ONE editors. Petersen said the three were no longer PLOS ONE editors.

Petersen says that the results highlight the need for megajournals to curb questionable behaviour by over-active editors, for example by limiting the number of papers each editor handles.

“This is likely to make editors more selective in the scientific research that they facilitate publishing,” says Petersen.

Joerg Heber, editor-in-chief at PLOS ONE, points out that the journal has taken steps to strengthen editorial standards and make the peer review process more transparent in the years following Petersen’s study period, including improving oversight of the editorial board and removing editors “that did not meet our high standards of manuscript handling.”

“Our staff editors work closely with our editorial board members on matters related to editorial decisions and peer review of individual manuscripts,” Heber writes in an email. “We strongly believe in the benefits of transparency and would encourage other journals to also surface their processes as transparently, as it is likely that such behaviour has occurred elsewhere, too.”
