Research needs less 'excellence', more competence

20 October 2016

COMMENT

Adrian Barnett


The research sector loves talking about excellence but is it best for science?

It is almost impossible to work in research without hearing the word excellence. Universities use it in their mission statements and funding agencies name programmes after it. The word even makes its way into institution titles, such as Germany’s Clusters of Excellence and the Australian Research Council (ARC) Centres of Excellence.

In a forthcoming paper, a group of open access researchers and advocates takes a sharp look at the science world’s pervasive use of the word. They go so far as to call it a fetish and conclude that it is having negative consequences for research. “Excellence is not excellent, it is a pernicious and dangerous rhetoric that undermines the very foundations of good research and scholarship,” write Samuel Moore, Cameron Neylon, Martin Paul Eve, Daniel O'Donnell and Damian Pattinson.

How can a positive term be considered so damaging? For starters, while it is true that researchers regularly come across the word ‘excellence’, the term doesn’t mean the same thing to all people. Excellence could mean working with great teams, achieving the highest standards or making a fundamental shift for the better, such as saving thousands of lives by overturning a routine practice. For many researchers, the word brings to mind the annual, and often painful, exercise of applying for funding and having to write about how their own work is ‘excellent’. Not all researchers are natural self-promoters.

Even national agencies struggle to define the term. The ARC runs the Excellence in Research Australia (ERA) exercise to benchmark the country’s universities. Research quality is rated on a 1-to-5 scale from “well below world standard” to “well above world standard”. But there is no direct definition of excellence. Rather, it is defined relative to what everyone else is doing, and in practice nobody knows what the “world standard” benchmark means. When the ERA ratings are released, university staff scour the results to see how many 4s and 5s they managed, and how many their rivals received.

If excellence in research exists, it should be possible to see it in data. For example, the highest ranked grant applications should be the most productive research projects. But when researchers examined grants funded by the United States National Institutes of Health they found only a weak association between a grant’s rating and what it produced.

This is not surprising given scientific research is inherently unpredictable, otherwise we wouldn’t need it. A review of clinical trials, published in Nature, found that negative trials – those where the new treatment was no better than the standard – occurred 40–50% of the time.

Although negative and positive trials are equally valuable to science, a positive trial is more valuable to a researcher’s career because it is easier to publish in a top journal, and journal prestige is a frequently used metric of excellence. When we reward excellence based on journal impact, we are partly rewarding luck.

The authors of the excellence paper suggest it would be better to focus on good research practice. For instance, a project would be judged by whether the researchers sought to answer a worthwhile question, planned and executed the study by defined standards and wrote up the results clearly and honestly. In this kind of system, excellence would be defined by how results were made, rather than by what they found.

Rewarding research based on competence would take the heat out of a system that has become hyper-competitive as more researchers compete for limited funding, which drives some scientists to spin their results to appear more positive. A smaller number commit fraud.

Celebrating competence over excellence is a hard political sell. Funding agencies and universities want to celebrate “excellent” research that has changed people’s lives, and this is welcome. These examples spark the public’s imagination and provide political capital for science. But inside the research world, instead of just focusing on positive, if somewhat fortunate, discoveries, we must also recognise that science is a methodical process that sometimes discovers an important failure.

Dr Adrian Barnett is a statistician who works in meta-research at the Queensland University of Technology, Brisbane. He tweets @aidybarnett
