News

Measuring the impact of R&D spending

Does pouring money into research always translate into better outcomes?

Myles Gough | 6 October 2016

Better connections between scientists and industry could help countries reap more rewards from research.
Credit: Mick Wiggins/Alamy Stock Photo


At the OECD’s once-in-a-decade Blue Sky Forum, held in Belgium in mid-September, science policy analysts and data users met to discuss the complex issues around measuring the output and impact of science, technology and innovation. They called for new indicators that capture the movements of scientists and the flow of knowledge, and for better metrics to assess innovation. They also discussed improved ways to leverage existing data to inform science and technology policy.

The forum’s objectives were prudent. Science policy in much of the industrialised world is largely underpinned by the belief that research and development investment will yield economic growth. The main indicator used to measure the intensity of this outlay is a country’s gross domestic expenditure on research and development (GERD) as a percentage of its GDP. The OECD average is 2.38%, which includes government, university and private investment; the European Union has a target of 3% by 2020.
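To make the intensity measure concrete (a hypothetical illustration, not OECD data): R&D intensity is simply GERD divided by GDP, expressed as a percentage.

\[ \text{R\&D intensity} = \frac{\text{GERD}}{\text{GDP}} \times 100\% \]

A hypothetical country spending US$24 billion on R&D against a GDP of US$1 trillion would thus have an intensity of 2.4%, just above the OECD average.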

There have been a number of rigorous studies showing a direct relationship between spending on research and scientific output, says Roy Green, dean of the business school at the University of Technology Sydney. For instance, a study by researchers from the UK and Saudi Arabia looked at 40 Asian countries and found that those that spent more on R&D, and had more universities and indexed journals, produced more high-quality research publications across the sciences and social sciences. Research in the UK has shown that public investment in R&D increases private sector investment in science and attracts foreign investment.

But, Green says, a link between R&D spending and the translation of that research into commercial outcomes is less clear. “There the picture is much more mixed,” he says.

Australia's problem

Australia is a fine case study. It has a record of producing high-quality research publications – it was 12th in the Nature Index 2015 Global ranking – but struggles to reap commercial and wider socio-economic rewards. In the 2016 Global Innovation Index (GII), Australia ranked 19th overall. It was 11th on research inputs, but 73rd on the innovation efficiency ratio, which measures how much innovation output a country achieves relative to its inputs, such as knowledge creation and number of researchers.
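For readers unfamiliar with the GII methodology, the efficiency ratio is, roughly, a country’s innovation output score divided by its innovation input score (a sketch of the idea; the precise sub-index construction is the GII’s own):

\[ \text{Innovation efficiency ratio} = \frac{\text{Output Sub-Index score}}{\text{Input Sub-Index score}} \]

A country whose output score lags well behind its input score gets a low ratio, which is how Australia can rank 11th on inputs yet 73rd on efficiency.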

Part of the problem lies in the transfer of knowledge from campuses to companies. In 2013, Australia ranked 29th out of 30 OECD countries on industry collaboration with universities and public research organisations.

Australian academics are evaluated and promoted based on their publication record, Green says, and these flawed incentives have dissuaded them from engaging with industry. Science Foundation Ireland – Ireland’s statutory body overseeing investment in scientific and engineering research – encourages the opposite, he says. It funds basic and applied research and brings together the public and private sectors. According to the 2016 Global Innovation Index, Ireland ranks 28th in producing citable documents (h-index) but 8th in innovation efficiency.

Time lag

David Popp, who studies the economics of technological change and is professor of public administration and international affairs at Syracuse University in the United States, says a major challenge with measuring the scientific output from R&D investments is timing: “Because the measurable outputs of the scientific process don’t appear immediately after funding, teasing out the causal effect of any specific R&D investment is difficult.”

His research into the renewable energy sector in the United States found that patent applications began to appear three years after funding and continued for up to 15 years. In medicine, the lag from initial public R&D investment to the development of new drugs can exceed two decades, according to a 2011 report on the pharmaceutical industry by Andrew Toole, an economist with the Economic Research Service of the United States Department of Agriculture.

Another challenge is selection bias in funding schemes. People who receive grants from the government publish papers afterwards, Popp says, but he questions whether the grants themselves made the difference. “Or did they get the grant because they are star researchers who likely would have published anyway?” he asks.

The definition of scientific output is another vexed issue for the scientific community. Traditionally, funding agencies and governments have looked at what can be counted, or reported anecdotally: papers, citations, patents, and success stories. Some academics, including New York University economics professor Julia Lane, have argued that measures need to be developed that more accurately account for research training, human networks, the transfer of ideas, and even research failures, to capture the true value of the scientific process.