Q+A

The 10 most common mistakes with statistics, and how to avoid them

Significant results are not the only goal.

Gemma Conroy

11 November 2019

Credit: z_wei/Getty

Small sample sizes, p-hacking and spurious correlations are notorious in scientific publishing. They’re also easy errors to make when under pressure to publish research results with impact, says neuroscientist Jean-Jacques Orban de Xivry.

“It’s much easier to publish significant results than experiments that don’t work. That’s an unfortunate statement, but it’s the truth.”

Jean-Jacques Orban de Xivry

Orban de Xivry, a professor at the Catholic University of Leuven in Belgium, teamed up with Tamar Makin, a cognitive neuroscientist at University College London, on a feature article in eLife outlining the ten most common statistical mistakes in scientific research. The full list of errors, along with advice on how to avoid them, appears in the paper.

Lapses in methodology, such as the absence of an adequate control group or the use of a small sample size, are among the biggest.

Inflating the units of analysis (where different groups, such as young people and old people, are combined to create a larger but unrepresentative sample) is another, and circular analysis, otherwise known as double-dipping, also features.
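
To see how pooling can mislead, here is a minimal simulation in Python (the groups and numbers are invented for illustration; this is not an analysis from the eLife paper). Within each group there is no relationship between the two measures, but pooling the groups produces a large, ‘significant’ correlation driven entirely by the difference between the group means.

```python
# Illustrative sketch: pooling two distinct groups can manufacture a
# correlation that exists in neither group on its own.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 30  # participants per group (hypothetical numbers)

# Within each group, x and y are unrelated noise around a group-specific mean.
young_x = rng.normal(20, 2, n)
young_y = rng.normal(50, 5, n)
old_x = rng.normal(70, 2, n)
old_y = rng.normal(80, 5, n)

for label, x, y in (("young", young_x, young_y), ("old", old_x, old_y)):
    r, p = stats.pearsonr(x, y)
    print(f"{label:5s} r = {r:+.2f}, p = {p:.2f}")  # no within-group relationship

# Pooling doubles the sample, but the apparent correlation only reflects the
# difference between the two group means, not a real relationship.
pooled_r, pooled_p = stats.pearsonr(np.concatenate([young_x, old_x]),
                                    np.concatenate([young_y, old_y]))
print(f"pooled r = {pooled_r:+.2f}, p = {pooled_p:.1e}")
```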

Orban de Xivry describes circular analysis as “analyzing your data based on what you see in the data.”

“This increases your chances of getting a significant result,” he says.
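
To make the problem concrete, here is a minimal simulation in Python (an illustrative sketch, not code from the eLife paper). The data are pure noise, but because the units are selected for looking responsive and then tested on the same data, the test comes out ‘significant’ far more often than the nominal 5% of the time.

```python
# Illustrative sketch: circular analysis ("double-dipping") inflates
# significance even when there is no real effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_simulations, n_units, n_trials = 500, 50, 20
false_positives = 0

for _ in range(n_simulations):
    # Pure noise: no unit has any real effect.
    data = rng.normal(0.0, 1.0, size=(n_units, n_trials))

    # Circular step: select the 10 units with the largest mean response,
    # based on the very same data that will be tested.
    top = np.argsort(data.mean(axis=1))[-10:]

    # Test the selected units against zero using that same data.
    t_stat, p_value = stats.ttest_1samp(data[top].mean(axis=1), 0.0)
    if p_value < 0.05:
        false_positives += 1

print(f"False-positive rate: {false_positives / n_simulations:.2f}")
# An unbiased test would give roughly 0.05; double-dipping pushes this towards 1.0.
# One fix: select units on one half of the trials and test on the other half.
```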

Nature Index spoke with Orban de Xivry about these errors and the effects they can have on research integrity.

What prompted you to outline these ten statistical mistakes?

My co-author, Tamar Makin, was originally using this list in her lab’s journal club to help researchers determine the reliability of the papers they were reading.

A lot of researchers – myself included – have made these mistakes, and often it’s only with training and time that you realize that you have made an error.

It’s not that we know everything or are perfect. It’s more that we’ve learnt these things over the years and we thought that sharing them could help make research more reliable.

What are some consequences of these errors?

When a paper does not show what the authors think it shows, it means that the paper is unreliable, the results cannot be trusted, and they will not be reproduced in the future. Sometimes this can lead to a whole series of papers that are too good to be true.

It’s a multifaceted problem and it can affect whole lines of research. In psychology, for example, there are so many papers that show effects that don’t really exist. A lot of this is due to publication bias and people doing whatever they can to get significant results.

While people who publish a series of significant results in high-impact journals are seen as very successful, it’s basically impossible to get reliable significant results all the time. It doesn’t work as well as you pretend it does.

How have these errors become so common?

One of the main factors is that we have the wrong incentives in science. We are rewarded for publishing a lot, not for being right. It’s very difficult to publish results that aren’t statistically significant, even if the study is solid. This means the best studies can’t be published in high-impact journals, which are the ones that make people’s careers.

Another thing I’ve realized is that very thorough papers are often seen as ‘unsexy’, as they tend to be very long and include replications of the results. We tend to have this idea that if you want to publish in a high-impact journal, the paper needs to be ‘sexy’. This is crazy to me, because solid science is not sexy.

The last factor is that it’s very difficult in science to tell someone that they have made a mistake. There is an absence of polite discussion and there’s no system that allows us to point out errors in papers. It’s too late once a paper is published and researchers don’t want other people commenting on their work, as it could compromise their reputation.

Some scientists also tend to feel personally attacked if their work is openly criticized, even if there are obvious mistakes.

What can the research community do to prevent these mistakes from happening?

I think one potential solution is making science open in a much broader sense, and I don’t just mean open access. It means presenting scientific results as something more than a PDF file on a website: showing how the study was done and including all the material, such as your data and analysis code.

It’s very difficult to detect p-hacking or circular analysis if you only have a PDF to review. But if you have the data and analysis scripts, readers can analyze the data themselves and judge whether the reported results are the most plausible outcome for that dataset.

I think if you share all these things, you show that you are ready to accept that there might be mistakes and are willing to improve. It increases confidence in what you have done, because people can freely check. This will have an impact on the reliability of statistical analyses, as mistakes are more likely to be detected.

I think open science will also have a positive influence on the way people perform research and could encourage institutions and funders to change incentives. Being right is better than being productive, and having one good study with all the material open is better than ten studies in PDFs.

What steps can early-career researchers take to improve their statistical analyses?

Adequate training and mentoring are definitely important for junior researchers, but they’re also important for more senior researchers. The problem is, how do you push people to train again in something that they think they already know?

Being active on social media and blogs has played a huge role in my own statistical training. Twitter is a great way to train yourself, as there are so many people commenting on statistics, sharing information and pointing out mistakes. It’s very easily accessible and it’s not just aimed at an expert audience.