TOP Factor rates journals on transparency, openness
New tool seeks to change editorial practices.
18 February 2020
A new journal rating system aims to encourage scientific editors and publishers to rethink, and in many cases dramatically overhaul, their commitment to transparency and reproducibility.
“Everybody hates the impact factor,” says Brian Nosek, executive director of the advocacy organization the Center for Open Science, which created the system, known as TOP Factor.
“But everyone recognizes that we’re [beholden] to it. This is the first step to a more mature evaluation process for journals.”
The TOP Factor attempts to capture elements of journal quality that aren’t considered by standard metrics such as the impact factor, which is based solely on citations, says Nosek, a psychologist at the University of Virginia in Charlottesville, where the Center for Open Science is based.
Journals are scored based on ten different criteria, including availability of data and policies on preregistration, the process of formally registering a research plan before a study begins.
The accompanying graph shows the distribution of scores among journals.
The scoring system, based on the Transparency and Openness Promotion (TOP) Guidelines released in 2015, awards journals zero to three points for each measure. For example, a journal that requires authors to disclose whether their data are available will score just one point for data transparency. A score of two means that data are generally available, with some exceptions. A score of three, which is rarely achieved, means that the journal independently checks the data to make sure they correspond with the reported results.
The TOP Factor addresses urgent issues in scientific publishing, says Tom Hardwicke, a psychologist at the Berlin Institute of Health who studies research practices and editorial policies. “The state of transparency in the scientific ecosystem is dire,” he says. But he adds that it’s too early to say if the TOP Factor or similar approaches can meaningfully change editorial practices.
Even if journals announce policy changes to improve their scores, he warns, the scientific community should remain vigilant to make sure the policies are actually put into practice and have the desired effects on transparency.
Nosek notes that different fields have different standards for transparency, making across-the-board comparisons potentially misleading.
For example, preregistration is relatively common in psychology, so journals in that field are likely to score well on that measure. Total scores, he says, are most meaningful when used to compare journals from similar fields.
If editors see that their journal is lagging behind its competitors, they could enact new policies to keep pace. “We’re trying to raise the floor,” he says.
At present, the floor couldn’t be lower. Of the 250 journals assessed with the TOP Factor so far, 40 failed to score a single point on any measure.
“There’s no good reason for a journal to be at zero,” Nosek says. “Any journal with a score of six or better is at least taking [transparency and reproducibility] seriously to some degree.”
Several psychology journals have scores in the 20s, but Nosek says such a score would be “extremely aspirational” for journals that cover multiple fields, such as Cell or Nature, which each scored an 11.
“There won’t ever be a singular solution” for rating journals, Nosek says, but the TOP Factor at least gives some credit where it’s due.
“There are good actors that are progressively moving toward openness and transparency,” he says. “We ought to be celebrating them.”