Funding debate over paper quality vs quantity

Researchers disagree over whether performance-based metrics adversely affect publication behaviour.

21 September 2017

Dyani Lewis


New analysis of Australian data adds fuel to the argument over the effects of linking university funding to publication output. The study, led by researchers in the Netherlands, challenges earlier assertions that allocating funding based on paper counts rewards quantity over quality.

In 1995, the then Australian Department of Employment, Education and Training began incorporating publication data — along with research income and postgraduate student numbers — in the formulae used to allocate funding for research and training at Australian universities. Few countries, most of them in Europe, have a national evaluation system to determine university funding, and fewer still — New Zealand, Spain, Norway and Belgium — incorporate publication metrics.

In 2003, Linda Butler, who led the Research Evaluation and Policy Project at the Australian National University, conducted an analysis of how the policy had affected the research community. Her findings suggested that while Australia’s share of publications in the Science Citation Index had increased, the impact of those publications — measured as Australia’s share of global citations relative to its share of publications — had declined, as researchers opted to publish in lower-impact journals. The Norwegian system, introduced in 2005, was designed to counter these adverse effects by weighting publications by quality and type.

The latest reanalysis of the data casts doubt on those conclusions. According to the new study by Peter van den Besselaar of Vrije Universiteit Amsterdam and colleagues, the overall impact of Australian publications rose in response to the policy measure. Australia’s share of highly cited papers — those in the top 10% — also steadily increased.


“It is not the case that there was a lot of low-level output,” says van den Besselaar. “If you’re stimulated to do more, then you try to do more, but also to do better, and that is what we find.”

The new analysis was published in the Journal of Informetrics along with a rebuttal by Butler, who declined to comment for this report. In the rebuttal, she argues that the new findings are negated by differences in methodology and a misinterpretation of when the new policy was likely to affect researcher behaviour.

Australia dropped publication counts from its government funding formulae in 2017. “It achieved what it was meant to do, which was to encourage a rise in productivity and publications by academics,” says Conor King, executive director of Innovative Research Universities, a body that advocates on behalf of six Australian universities.

King was on the advisory committee for a government-initiated review into research funding in 2015, which recommended dropping the publication component of the funding formulae. The review argued that “publications are not an effective measure of the research training environment within a university,” and therefore should not be used to determine funding for research training.

Some researchers doubt that evaluation metrics have any effect on the habits of researchers. They argue that the low weighting of publication counts — just 10% in Australia — and their use in determining university-wide rather than individual funding mean that any incentive to game the system effectively disappears. “There is no immediate consequence for the researcher,” says Jochen Gläser, a former member of Butler’s evaluation team, now at Technische Universität Berlin. “It’s very difficult to establish causality.”

Van den Besselaar sees no good reason to remove publication count from the assessment, but concedes that its removal will probably have little effect. “Removing it now won’t change the competitive culture of the Australian system,” he says.
