Australian funding agency turns a blind eye to evidence
Ignoring evidence when deciding where research funding goes will not lead to high-quality research.
7 March 2017
Australia’s main science funder is not taking an evidence-based approach in reforms to its system of funding allocation. Instead, it is relying on expert opinion, the lowest-quality form of evidence, to guide its decisions.
The National Health and Medical Research Council (NHMRC) in Australia funds research for the benefit of human health, usually by providing financial resources for scientists to collect data, run experiments and examine the evidence.
But, in trying to find the best way to allocate funds, the NHMRC has so far failed to cite any published experiments on funding. A 64-page consultation paper released last year that proposed three alternative models — funding individuals, teams or ideas — did not include a single citation to published work.
More than 300 expert opinions were summarised and published last week. Not surprisingly, “the feedback provided was diverse with no clear preference for one of the alternative models”.
The review, prompted by rising application numbers, falling success rates and plummeting morale amongst researchers, is welcome. But a summary of the current evidence may have led to a more useful discussion.
Studies that could inform an evidence-based discussion of funding policy have been carried out, and several are directly pertinent to the three proposed models. For example, a study of a new funding system in France, which examined more than €10 billion in research funding over five years, found that funding younger applicants had a much larger impact on research output.
Another study, in Canada, revealed that it was more effective to divide funding into multiple small grants rather than fewer large ones: two $1-million grants generally produce more research than one $2-million grant. Other studies have examined the important question of whether current funding systems discourage innovative applications.
There is even evidence from within the NHMRC. Our research group worked with the NHMRC in 2013 on a randomised trial examining the level of agreement between independent peer reviewers of grant applications, and found that concurrence was higher when reviewers assessed people rather than projects.
This has a clear policy implication: it strengthens the case for putting more money into people, because that money would more reliably go to high-quality research.
Without systematically examining the evidence we are left to rely on expert opinion, which can be wrong. For example, many have predicted that a funding round with no deadlines would create an avalanche of applications. But a recent trial by the National Science Foundation in the United States found a 59% reduction in applications.
The studies of funding mentioned here are the tip of the iceberg. A group in the United Kingdom recently searched for published evidence on the best designs for funding systems and found more than 1,700 references.
Ignoring evidence means we risk changing Australia's national funding policy to a system that has already been trialled and failed elsewhere. The opinions of the research community are important when making such a big change, but researchers also appreciate seeing published evidence.
Professor Adrian Barnett is a statistician who works in meta-research at the Queensland University of Technology, Brisbane. He tweets @aidybarnett
Response from NHMRC CEO Anne Kelso:
The Structural Review of NHMRC’s Grant Program has been informed by a range of evidence. We have read with interest papers about research funding by Professor Barnett and his colleagues, the papers cited in their submission to the review and many others. The review has also considered health and medical research funding systems in other countries, including reforms of those systems.
Following an extensive public consultation and advice from an Expert Advisory Group, the NHMRC is now considering whether to change its grant program structure to meet the aims of the review. These are to reduce the burden on researchers of application and peer review, encourage greater creativity and innovation, and provide opportunities for talented researchers at all career stages.