Out of date before it's published
Zika provides a vision for how to keep pace with a fast-moving research field.
30 July 2019
Nature Index 360°
In 2015, an outbreak of the then little-known Zika virus in Brazil was accompanied by a sudden spike in neurological conditions. Most striking were reports of Zika-infected mothers giving birth to babies with microcephaly – unusually small heads and underdeveloped brains.
A surge in research on the virus swiftly followed, with over 700 articles recorded on the PubMed database between the beginning of 2015 and May 2016.
To make sense of this research deluge, the World Health Organization recruited Nicola Low, an epidemiologist at the University of Bern in Switzerland, to lead a systematic review of the Zika evidence.
The critical feature of a systematic review, Low explains, is that before the work even begins, the authors must define a protocol stating how they will identify, select and synthesise the evidence around a clearly formulated research question. This reduces bias and allows the review to be reproduced.
Low and her team concluded that Zika infection during pregnancy was the most likely cause of microcephaly. However, by the time their review was published in January 2017, it was already out of date. In the eight months it had taken to synthesise the evidence and pass peer review, another 1,400 new Zika-related papers had been added to the scientific literature.
The team set to work again and just over a year later published a second Zika systematic review. Knowing that this would also be immediately obsolete, they announced that they were transitioning to a "living systematic review".
Instead of starting afresh for the next update, they would use their review protocol to provide regular updates that incorporate the very latest evidence.
"We’ve got automated searches that come from the major databases and epidemiological surveillance websites," Low explains. "We’ve got a filter that picks out the papers that are relevant and then we have people who are checking those every day as they come in."
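The workflow Low describes could be sketched, purely for illustration, as a keyword screen over incoming records that builds a daily queue for human checking. The field names and search terms below are assumptions, not details of the Bern team's actual pipeline, which uses more sophisticated filtering:

```python
# Illustrative sketch only: a crude relevance filter over incoming records,
# loosely modelled on the living-review workflow described in the article.
# Real pipelines use trained classifiers, not a hand-written keyword list.

RELEVANT_TERMS = {"zika", "zikv", "microcephaly", "congenital zika syndrome"}

def is_relevant(record: dict) -> bool:
    """Flag a record for human screening if its title or abstract
    mentions any term of interest (hypothetical criteria)."""
    text = (record.get("title", "") + " " + record.get("abstract", "")).lower()
    return any(term in text for term in RELEVANT_TERMS)

def daily_screening_queue(records: list[dict]) -> list[dict]:
    """Keep only the records the filter flags; these go to the people
    who check new papers every day as they come in."""
    return [r for r in records if is_relevant(r)]

# Example: two incoming records, only one relevant
incoming = [
    {"title": "Zika virus infection in pregnancy", "abstract": "..."},
    {"title": "Influenza surveillance report", "abstract": "..."},
]
queue = daily_screening_queue(incoming)
print(len(queue))  # 1
```

The point of the sketch is the division of labour: automation narrows the stream, and humans make the final inclusion call.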
Research outpaces understanding
The unusually fast-moving Zika research field illustrates a broader challenge facing medical and scientific research.
"We don’t have the tooling or the systems to create reliable evidence in a way that can keep up with the deluge of research," says infectious disease researcher Julian Elliott who, with colleagues, first proposed the idea of living systematic reviews in a 2014 paper in PLoS Medicine.
The inspiration, he says, came from his experiences developing clinical guidelines and policies for Cambodia’s fledgling HIV treatment program in the early 2000s. "It was just very difficult to get our hands on reliable, up-to-date evidence to inform those decisions," he says.
Elliott, now a senior research fellow at Monash University in Melbourne, Australia, is co-leader of Project Transform, a Cochrane Collaboration initiative leveraging new technologies to increase the speed and efficiency of systematic review production.
Since its formation in the early 1990s, Cochrane has pioneered systematic reviews for medical research, publishing over 7,500 to date. Cochrane policy is that reviews should be revisited within two years, but even this may not be frequent enough, Elliott argues.
A 2007 study following up 100 systematic reviews (including 27 Cochrane Reviews) found that in 15 cases, new evidence warranting a “substantial change” in the conclusions had emerged within 12 months.
Accelerating the process of updating systematic reviews requires a modernisation of researcher workflows, argues Chris Mavergames, Cochrane’s Head of Informatics & Knowledge Management. "We need to agree to new methods," he says. "And we need to leverage tech a lot better."
Central to Cochrane’s technological revolution is a semi-automated ‘evidence pipeline’ that combines machine learning with a crowd-sourcing platform.
Volunteers (often individuals affected by a condition that Cochrane is investigating) can contribute towards systematic review production by completing simple tasks such as identifying the results table in a PDF of an article. "For some tasks," Mavergames explains, "the machine almost never makes mistakes. But then it can't do very simple things that humans can do easily."
The first successful implementation of this hybrid approach involves screening the research literature for randomised controlled trials that can be added to Cochrane’s database. But Mavergames says this is just the beginning.
"Where we’d really like to get to is to identify the trials, grab the paper, find the data, extract key characteristics like the size of the trial, and then automatically check to see if it could possibly influence the result of an existing systematic review."
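The machine-plus-crowd triage that Mavergames describes can be imagined as routing each record by classifier confidence: the machine decides the easy cases, and humans handle the uncertain middle. The threshold and category names below are invented for illustration and are not part of Cochrane's actual system:

```python
# Hypothetical sketch of a machine-plus-crowd triage step. The machine
# scores how likely a record is to report a randomised controlled trial;
# only ambiguous records are routed to human (crowd) screening.
# The 0.9 threshold is an invented illustration, not Cochrane policy.

CONFIDENCE_THRESHOLD = 0.9

def triage(record_id: str, machine_score: float) -> str:
    """Route a record based on classifier confidence.
    machine_score: assumed probability (from some trained classifier)
    that the record reports a randomised controlled trial."""
    if machine_score >= CONFIDENCE_THRESHOLD:
        return "auto-include"   # "the machine almost never makes mistakes" here
    if machine_score <= 1 - CONFIDENCE_THRESHOLD:
        return "auto-exclude"   # confidently not a trial
    return "crowd-screen"       # humans resolve the uncertain middle

print(triage("rec-001", 0.97))  # auto-include
print(triage("rec-002", 0.05))  # auto-exclude
print(triage("rec-003", 0.55))  # crowd-screen
```

Splitting the stream this way is what makes the crowd-sourcing tractable: volunteers see only the records the machine cannot settle.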
The final step in the updating process is to assess the quality of the new evidence. In a Cochrane Review, the authors flag studies where there is a ‘risk of bias’ such as inadequate blinding, under-reporting of outcomes, or potential conflicts of interest.
This assessment is currently performed manually, with two experts reading each paper included in the review. However, a team led by Byron Wallace at Northeastern University in Boston, Massachusetts is training a machine learning algorithm, dubbed RobotReviewer, to assess the quality of randomised controlled trials.
A demo, now live on the team’s website, allows users to drag and drop a PDF and within a few seconds receive a report with key information from the paper and a preliminary risk of bias rating. It’s not ready for use in the production of systematic reviews, but Mavergames is excited by its potential. "If it gets good enough that we can let a machine do one review and a human do the other, that's huge," he says. "It’s a 50% efficiency saving."
For now at least, undertaking a living systematic review is a serious commitment of time and human resources. Elliott’s team have identified three main criteria for deciding which research questions to prioritise: the topic should be important to stakeholders, there should be some uncertainty in the existing evidence, and the research field should be active.
To date, the Cochrane Database of Systematic Reviews has published five living systematic reviews covering topics from anticoagulants for cancer patients to interventions aimed at increasing children’s intake of fruit and vegetables.
Meanwhile, the Journal of Neurotrauma has also published a collection of five living systematic reviews focused on brain injury. Nine other journals, including Faculty of 1000, PLoS One, and BMJ Open have also published at least one living systematic review.
However, as Elliott acknowledges, there are technical, organizational and cultural barriers to more widespread adoption of this publishing model. Publishers will need to invest in infrastructure that allows tracking of the different versions of papers.
Editors will have to decide when an update requires peer review. Team leaders will have to manage an evolving roster of review authors and ensure that due credit is given.
And in a world where the productivity of researchers is measured by the number of publications, hiring committees and grant review panels will have to determine how to evaluate CVs that include multiple versions of the same publication.
But, for Elliott, the more important challenge lies in translating living systematic reviews into real world outcomes so that guidelines, policies, and decision making are informed by the latest and the best available evidence.
"Ultimately," he says, "the value proposition of research is that it improves society. But the only way we can do that and get a proper return on society’s investment in research is by translating it into real benefits through action."