Luck of the draw
Funders should assign research grants via a lottery system to reduce human bias.
7 May 2018
Research rankings by committee or peer review are notoriously unreliable. The same grant application that succeeds in one round might be rejected in the next. A fairer outcome could be achieved through a lottery system. Relying on the luck of the draw would reduce the influence of implicit human biases (such as risk aversion and preferences for certain genders and races) on funding decisions.
In such a model, a committee would scrutinise proposals to ensure that they met the funder's remit and were of high methodological quality. Proposals that met these minimum requirements would be put into a pool to be selected at random.
I floated this idea in a Twitter poll in April, which brought 1,060 responses within 24 hours. Most respondents approved of a lottery approach, regardless of their funding status, with 66% in favour and 34% against.
Interesting poll results: at first it looked as if people were more favourable to random allocation of research funds if they were currently funded, but final result indicates general 2:1 in favour of random pic.twitter.com/KPV9Xrr5tg — Dorothy Bishop (@deevybee) April 7, 2018
As is often the way with Twitter, the poll encouraged people to point me to existing literature I had not been aware of. In particular, last year, Mark Humphries, a computational neuroscientist at the University of Nottingham, made a compelling argument for randomness in funding allocations, focussing on the expense and unreliability of current peer review systems.
Shahar Avin at the University of Cambridge has recently conducted a detailed scholarly analysis of policy implications for random funding, in the course of which he mentions three systems where this has been tried: the Volkswagen Foundation's Experiment! grants, the Health Research Council of New Zealand's Explorer Grants, and New Zealand's Science for Technological Innovation seed projects.
In another manuscript, Avin presented a computer simulation comparing explicit random allocation with peer review. The code is openly available, and the results from the scenarios modelled by Avin suggest that including an element of randomness in funding increases innovation.
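Avin's actual model is considerably more sophisticated, but the core comparison can be conveyed with a toy simulation (all numbers here are invented for illustration, and this is not his code): proposals have a latent quality, reviewers observe a noisy version of it, and we compare funding the top-scored proposals against drawing at random from all proposals above a threshold.

```python
import random

random.seed(1)

N_PROPOSALS = 100   # proposals per round (illustrative figure)
N_FUNDED = 10       # grants awarded per round
NOISE_SD = 0.5      # unreliability of reviewer scores
THRESHOLD = 0.3     # score needed to enter the lottery pool
ROUNDS = 1000

def run_round(lottery):
    # Each proposal has a latent "true quality" in [0, 1];
    # reviewers observe it with added Gaussian noise.
    quality = [random.random() for _ in range(N_PROPOSALS)]
    scored = [(q + random.gauss(0, NOISE_SD), q) for q in quality]
    if lottery:
        # Lottery: draw at random from everything above the bar.
        pool = [q for s, q in scored if s > THRESHOLD]
        funded = random.sample(pool, min(N_FUNDED, len(pool)))
    else:
        # Conventional review: fund the top-scored proposals.
        funded = [q for _, q in sorted(scored, reverse=True)[:N_FUNDED]]
    return sum(funded) / len(funded)  # mean true quality of funded set

results = {}
for name, lottery in (("ranking", False), ("lottery", True)):
    results[name] = sum(run_round(lottery) for _ in range(ROUNDS)) / ROUNDS
    print(name, round(results[name], 3))
```

In this toy model, ranking on noisy scores still selects somewhat better proposals on average, but the gap shrinks as NOISE_SD grows; what the sketch cannot capture is the systematic bias against risky, innovative proposals that Avin's analysis highlights.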
Readers might also be interested in a simulation by researchers in Italy of the effect of luck on a meritocracy. While their analysis is not specific to research funding, it has some relevance. The authors conclude: “Almost never the most talented people reach the highest peaks of success, being overtaken by mediocre but sensibly luckier individuals.”
Tweeters pointed to even more radical proposals, such as collective allocation of science funding, giving all researchers a limited amount of funding, or yoking risk to reward.
Having considered these sources and a range of additional comments, I believe it would be worth a funder such as the Wellcome Trust leading a trial of random allocation of funding for proposals that meet a quality criterion.
As tweeted by Dylan Wiliam, the key question is whether peer review does indeed select the best proposals. To test this, those who applied for seed funding could be randomly directed to either stream A, where proposals undergo conventional evaluation by committee, or stream B, where the committee engages in a relatively light-touch process to decide whether to enter the proposal in a lottery, which then decides its fate. Streams A and B could each have the same budget, and their outcomes could be compared a few years later.
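The two-stream design could be prototyped very simply. The sketch below uses invented application names, an arbitrary budget, and a placeholder triage rule; it only shows the shape of the allocation, not a real funder's process.

```python
import random

random.seed(3)

# Hypothetical applications; scores stand in for committee judgements.
applications = [f"app{i:02d}" for i in range(20)]
score = {app: random.random() for app in applications}
BUDGET_PER_STREAM = 5  # grants each stream can fund (invented)

# Step 1: randomise applications between the two streams.
shuffled = random.sample(applications, len(applications))
stream_a = shuffled[: len(shuffled) // 2]
stream_b = shuffled[len(shuffled) // 2 :]

# Step 2a: stream A, conventional ranking by committee score.
funded_a = sorted(stream_a, key=lambda a: score[a], reverse=True)[:BUDGET_PER_STREAM]

# Step 2b: stream B, light-touch triage, then a lottery among survivors.
eligible_b = [app for app in stream_b if score[app] > 0.2]  # triage threshold
funded_b = random.sample(eligible_b, min(BUDGET_PER_STREAM, len(eligible_b)))

print("stream A funds:", funded_a)
print("stream B funds:", funded_b)
```

Because applications are assigned to streams at random, any later difference in outcomes between the two funded groups can be attributed to the allocation method rather than to the applications themselves.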
Sorry to duck the issue but I think the crucial point here is whether there is evidence that the higher rated proposals are, in fact, better. I argued for something similar in admission to medical education some years ago. Set a threshold, and select at random from those above it — Dylan Wiliam (@dylanwiliam) April 6, 2018
I’d recommend this approach specifically for seed funding because of the disproportionate administrative burden for small grants. There would, in principle, be no reason not to extend the idea to larger grants, but I suspect that the more money at stake, the greater the reluctance to include an explicit element of chance in the funding decision. And, as Avin noted, very expensive projects need funds committed over more than one funding cycle, which makes a lottery approach unsuitable.
Some of those responding to the Twitter poll noted potential drawbacks of a lottery approach. Hazel Phillips, chief operating officer at the National Institute for Health Research's Bristol Biomedical Research Centre, suggested that random assignment would make it harder to consider strategic issues, such as a researcher’s career stage or the importance of a topic. This could be addressed by creating a separate pool for a subset of proposals that meet additional criteria and giving them a higher chance of funding.
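Strategic weighting of this kind is easy to implement within a single draw: proposals meeting the additional criteria simply get a larger weight. The proposal IDs, weight, and number of awards below are invented for illustration.

```python
import random

random.seed(7)

# Hypothetical pool: (proposal id, priority flag). Priority proposals
# (e.g. from early-career researchers) get double weight in the draw.
pool = [("P01", False), ("P02", True), ("P03", False), ("P04", True),
        ("P05", False), ("P06", False), ("P07", True), ("P08", False)]
PRIORITY_WEIGHT = 2.0
N_AWARDS = 3

def weighted_lottery(pool, n):
    """Draw n distinct proposals, weighting priority entries more heavily."""
    remaining = list(pool)
    winners = []
    for _ in range(n):
        weights = [PRIORITY_WEIGHT if prio else 1.0 for _, prio in remaining]
        pick = random.choices(range(len(remaining)), weights=weights, k=1)[0]
        winners.append(remaining.pop(pick)[0])
    return winners

winners = weighted_lottery(pool, N_AWARDS)
print(winners)
```

A design choice worth noting: because the weights are explicit, the funder can publish exactly how much of an advantage any strategic category receives, which is more transparent than the informal uplift a committee might apply.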
How do you determine passing the quality threshold? Don’t rely on reviewers scores, you need to take their comments into account. Random assignment makes it harder to give strategic uplift eg to ECRs or a topical subject (though IME boards find the latter hard) — Hazel Phillips (@corylus) April 7, 2018
Another concern was that institutions or individuals could game the system by submitting numerous proposals in scattergun fashion. I don’t see this as a serious objection: the initial filter would weed out poorly motivated proposals, and applicants could be limited to one proposal per round.
Many respondents expressed concerns about the initial triage: how would the threshold for entry into the pool be set? In practice, it would be feasible to develop transparent criteria for determining which proposals don’t get into the pool. These could include methodological limitations that prevent a study from giving a coherent answer to the question it poses, ill-formed research questions, or investigations of questions that have already been answered adequately. A blogpost by Paul Glasziou and Iain Chalmers makes a good start in identifying characteristics of research proposals that should not be considered for funding.
There are advantages to the lottery approach that transcend cost savings. Avin’s analysis concludes that reliance on peer review leads to a bias against risk-taking. Leaving the decision to chance would mean that researchers are not discouraged from submitting novel and creative ideas. Once a proposal is in the pool, there would also be no scope for bias against researchers on grounds of gender or race — a particular concern when relying on interviews to assess grants.
Marina Papoutsi, a cognitive neuroscientist at University College London, noted that some institutions evaluate their staff in terms of how much grant income they bring in — a practice that ignores the role of chance in current funding practices. A lottery approach, where the randomness is explicit, would put paid to such practices.
If this got implemented it would also have a positive effect on promotions ... currently based on funding (in top ranking research institutes at least) and perpetuating inequality — Marina Papoutsi (@mp_neuro) April 6, 2018