How researchers can improve the quality of systematic reviews

A guideline to boost transparency is being updated.

24 September 2019

Jon Brock



Matthew Page, a research fellow in the School of Public Health and Preventive Medicine at Monash University in Australia, is interested in the biases that affect research and its reporting.

Through his work with Cochrane, an international non-profit network of healthcare researchers with a focus on evidence-based medicine, he’s investigating the transparency and reproducibility of systematic reviews.

A systematic review uses clearly defined and reproducible methods to identify and synthesize the results of studies on a particular topic, such as the spread of Zika virus.

Systematic reviews are often considered to be the strongest form of scientific evidence because they offer increased statistical power and more precise results, and can resolve conflicting findings across studies.

However, in 2016, John Ioannidis, a physician researcher at Stanford University, raised concerns about the overproduction of systematic reviews.

Many, he argued, were of poor quality. It’s often unclear how the authors decided which studies to include, and there’s a lot of redundancy, with multiple reviews covering the same ground, Ioannidis wrote.

Are these concerns valid? And if so, what can be done to improve the credibility of systematic reviews? Page shares his thoughts with Nature Index.

Why are systematic reviews important?

We need systematic reviews because we have a lot of studies looking at the same question and it’s difficult for your average reader, clinician, patient or even researcher to make sense of all of that literature without it being brought together and synthesized into a meaningful whole.

In the past, we had narrative reviews. They were often written by expert opinion leaders, but they used non-systematic methods and were based on the research known to those authors, as opposed to the full spectrum of research that had been done.

Whenever I’ve been involved in systematic reviews, they’ve always thrown up surprises. You think you know everything that’s out there on a given topic, but I continue to be surprised at how much information is available.

What’s the difference between a systematic review and a meta-analysis?

The terms meta-analysis and systematic review are often used interchangeably, though this is technically incorrect. A meta-analysis is one of several methods available for mathematically combining the results of studies.

While it’s possible to conduct a systematic review on any topic, it’s not always possible or appropriate to conduct a meta-analysis.
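To illustrate what "mathematically combining the results" typically means, here is a minimal sketch of one common meta-analytic method, fixed-effect inverse-variance pooling, in which each study's effect estimate is weighted by the inverse of its variance. The study numbers below are purely hypothetical, and real meta-analyses often use more elaborate models (e.g. random-effects):

```python
import math

def fixed_effect_meta(estimates, std_errors):
    """Pool study effect estimates with inverse-variance weights
    (fixed-effect model): each study is weighted by 1 / SE^2."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical trials, each reporting a mean difference
# between treatment and control, with its standard error.
estimates = [0.30, 0.10, 0.25]
std_errors = [0.10, 0.15, 0.08]

pooled, se = fixed_effect_meta(estimates, std_errors)
print(f"pooled estimate = {pooled:.3f}, standard error = {se:.3f}")
```

Note how the most precise study (smallest standard error) pulls the pooled estimate hardest; this weighting is what gives a meta-analysis its increased statistical power over any single study.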

Can you explain the sudden popularity of systematic reviews?

There are some positive reasons. Many clinical practice guidelines rely on the findings of systematic reviews. There’s a push for people to do these reviews so they are contributing to a product that could influence patient care.

But in the eyes of some people, systematic reviews are relatively easy to do because you don’t have to jump through all the hoops of getting ethical approval and recruiting patients.

They also tend to have a lot more citations than the individual studies themselves, so journals tend to like publishing them because they can improve their impact factor.

How can researchers improve the quality of their systematic reviews?

PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) is a reporting guideline designed to help authors prepare a transparent account of their systematic review.

If authors follow PRISMA correctly, readers should be able to understand what the authors did and what they found. It’s relatively agnostic with regard to the actual methods used. For example, rather than stipulating that authors must search database X, Y and Z, it merely asks them to declare which database(s) they did search.

PRISMA was disseminated in 2009. My colleagues and I are currently updating it to ensure that authors are reporting their reviews in line with the more recent standards of conduct expected for systematic reviews.

Another initiative is PROSPERO, a database for preregistering systematic reviews. It involves specifying in advance the question you hope to answer, which databases you plan to search, which methods you will use, and which analyses you will perform.

Systematic reviews often consider “risk of bias”. What does that mean?

Risk of bias is a way of saying how much you trust the findings of a study. For various reasons, not all studies are conducted using the most rigorous methods. What we do in a Cochrane Review (a systematic review of primary research in human healthcare and health policy) is essentially warn our readers about whether we think the findings of studies included in the review are trustworthy or not.

For example, if it’s a randomized controlled trial, we ask whether adequate methods were used to randomly allocate patients into groups, and whether the patients and the person assessing the outcomes knew which treatment was received.

We ask whether there was drop-out and whether study authors may have cherry-picked the most favourable reports.

With Cochrane Reviews, we tend to include all relevant studies in the review, but we’ll take out studies that were at high risk of bias and see if that affects the results.

Cochrane focuses on medical research. Do you think systematic reviews have a place in other fields of research?

Other areas of science have essentially the same issues that we have had in medicine, so they could certainly adopt these methods.

There are barriers to adoption. For example, if papers in a field are not indexed carefully in bibliographic databases, it can be difficult and time-consuming to identify relevant literature. Also, a lot of studies are reported poorly, which means that systematic reviewing often involves tracking down the missing information.

But as long as the research community disseminates its findings, there is opportunity to synthesize those findings in a systematic review.
