Luck of the draw

Funders should assign research grants via a lottery system to reduce human bias.

7 May 2018

COMMENT

Dorothy Bishop


Research rankings by committee or peer review are notoriously unreliable. The same grant application that succeeds in one round might be rejected in the next. A fairer outcome could be achieved through a lottery system. Relying on the luck of the draw would reduce the influence on funding decisions of implicit human biases, such as risk aversion and preferences for certain genders and races.

In such a model, a committee would scrutinise proposals to ensure that they met the funder’s remit, and were of high methodological quality. Proposals that met these minimum requirements would be put into a pool to be selected at random.
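The model described above is simple enough to sketch in a few lines. This is a minimal illustration only, with hypothetical field names (`in_remit`, `sound_methods`, `cost`) standing in for whatever criteria and budget rules a real funder would use:

```python
import random

def run_lottery(proposals, budget, seed=None):
    """Screen proposals, then fund a random subset within budget.

    Each proposal is a dict with hypothetical keys: 'in_remit' and
    'sound_methods' (the committee's minimum requirements) and 'cost'.
    """
    rng = random.Random(seed)
    # Committee triage: only proposals that meet the funder's remit and
    # are methodologically sound enter the pool.
    pool = [p for p in proposals if p["in_remit"] and p["sound_methods"]]
    # The draw: shuffle the pool and fund in random order until the
    # budget is exhausted.
    rng.shuffle(pool)
    winners, spent = [], 0
    for p in pool:
        if spent + p["cost"] <= budget:
            winners.append(p)
            spent += p["cost"]
    return winners
```

The point of the sketch is that the only judgement call is the binary screen; everything after that is explicit, auditable chance.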

I floated this idea in a Twitter poll in April, which brought 1,060 responses within 24 hours. Most respondents approved of a lottery approach, regardless of their funding status, with 66% in favour and 34% against.

Funding lottery

As is often the way with Twitter, the poll encouraged people to point me to existing literature I had not been aware of. In particular, last year, Mark Humphries, a computational neuroscientist at the University of Nottingham, made a compelling argument for randomness in funding allocations, focussing on the expense and unreliability of current peer review systems.

Shahar Avin at the University of Cambridge has recently conducted a detailed scholarly analysis of policy implications for random funding, in the course of which he mentions three systems where this has been tried: the Volkswagen Foundation’s Experiment! grants, the Health Research Council of New Zealand’s Explorer Grants, and New Zealand’s Science for Technology Innovation SEED projects.

In another manuscript, Avin presented a computer simulation comparing explicit random allocation with peer review. The code is openly available, and the scenarios Avin modelled suggest that including an element of randomness in funding increases innovation.

Readers might also be interested in a simulation by researchers in Italy of the effect of luck on a meritocracy. While their analysis is not specific to research funding, it has some relevance. The authors conclude: “Almost never the most talented people reach the highest peaks of success, being overtaken by mediocre but sensibly luckier individuals.”

Tweeters pointed to even more radical proposals, such as collective allocation of science funding, giving all researchers a limited amount of funding, or yoking risk to reward.

Having considered these sources and a range of additional comments, I believe it would be worth a funder such as the Wellcome Trust leading a trial of random allocation of funding for proposals that meet a quality criterion.

As tweeted by Dylan Wiliam, the key question is whether peer review does indeed select the best proposals. To test this, those who applied for seed funding could be randomly directed to either stream A, where proposals undergo conventional evaluation by committee, or stream B, where the committee engages in a relatively light-touch process to decide whether to enter the proposal in a lottery, which then decides its fate. Streams A and B could each have the same budget, and their outcomes could be compared a few years later.

I’d recommend this approach specifically for seed funding because of the disproportionate administrative burden for small grants. There would, in principle, be no reason not to extend the idea to larger grants, but I suspect that the more money is at stake, the greater will be the reluctance to include an explicit element of chance in the funding decision. And, as Avin noted, very expensive projects need funds committed over more than one funding cycle, which makes a lottery approach unsuitable.

Impartial judge

Some of those responding to the Twitter poll noted potential drawbacks of a lottery approach. Hazel Phillips, chief operating officer at the National Institute for Health Research’s Bristol Biomedical Research Centre, suggested that random assignment would make it harder to consider strategic issues, such as a researcher’s career stage or the importance of a topic. This could be addressed by creating a separate pool for a subset of proposals that meet additional criteria and giving them a higher chance of funding.
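A weighted draw is one way to give such a subset a higher chance without abandoning the lottery. The sketch below is illustrative only: the `priority` flag and the weight of 2 are hypothetical stand-ins for whatever strategic criteria and odds a funder might choose:

```python
import random

def weighted_draw(pool, n_awards, priority_weight=2.0, seed=None):
    """Draw n_awards proposals without replacement, giving proposals
    flagged 'priority' (e.g. early-career applicants, strategic topics)
    priority_weight times the chance of the rest."""
    rng = random.Random(seed)
    remaining = list(pool)
    winners = []
    while remaining and len(winners) < n_awards:
        weights = [priority_weight if p["priority"] else 1.0
                   for p in remaining]
        # random.choices supports per-item weights; remove each winner
        # so no proposal can be drawn twice.
        pick = rng.choices(remaining, weights=weights, k=1)[0]
        winners.append(pick)
        remaining.remove(pick)
    return winners
```

Because the weighting is declared up front, the strategic preference stays transparent rather than being buried in committee deliberations.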

Another concern was that institutions or individuals could game the system by submitting numerous proposals in scattergun fashion. I don’t see this as a serious objection. The initial filter would weed out proposals that were poorly motivated, and applicants could be limited to one proposal per round.

Many respondents expressed concerns about the initial triage: how would the threshold for entry into the pool be set? In practice, it would be feasible to develop transparent criteria for determining which proposals don’t get into the pool. These could include methodological limitations that prevent a proposal from giving a coherent answer to the question it poses, ill-formed research questions, or investigations of questions that have already been answered adequately. A blogpost by Paul Glasziou and Iain Chalmers makes a good start in identifying characteristics of research proposals that should not be considered for funding.

There are advantages to the lottery approach that transcend cost savings. Avin’s analysis concludes that reliance on peer review leads to a bias against risk-taking. Leaving the decision entirely to chance would mean that researchers are not discouraged from submitting novel and creative ideas. Once a proposal is in the pool, there would also be no scope for bias against researchers in terms of gender or race — a particular concern when relying on interviews to assess grants.

Marina Papoutsi, a cognitive neuroscientist at University College London, noted that some institutions evaluate their staff in terms of how much grant income they bring in — a practice that ignores the role of chance in current funding practices. A lottery approach, where the randomness is explicit, would put paid to such practices.

Dorothy Bishop is a professor of developmental neuropsychology at the Department of Experimental Psychology, University of Oxford. This article was originally posted on her blog.
