Q&A: 5 simple ways to make your research more reproducible
Small details can make a big difference.
31 October 2019
When David Sholl and his colleagues began investigating the reliability of findings in the chemical sciences, they noticed that something was missing.
“Our analysis strongly suggested that people are doing replicated experiments but are not reporting them in their papers,” says Sholl, a chemical engineer at the Georgia Institute of Technology.
Reporting replicated experiments is an important part of keeping the scientific community informed on the most robust methods, but the pressure to publish novel findings can often deter scientists from doing so, says Sholl.
While efforts to improve reproducibility and the reporting of such experiments have largely focused on biomedical sciences and psychology, Sholl says many of the recommendations from those fields, such as preregistering studies, are not appropriate for chemistry and materials science.
To address this gap, Sholl published an editorial in the journal Langmuir outlining five steps that researchers in the chemical sciences can take to make their research more reproducible.
Nature Index spoke to Sholl about how researchers can make their studies more rigorous.
What are your five steps for ensuring reproducible research?
David Sholl: Step 1. Show error bars on your data
Use error bars as a way of talking about the uncertainty or number of repeats you did in the experiment. That’s a fairly basic part of doing science, but we all forget to do this from time to time.
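As an illustration of the quantities an error bar typically represents, here is a minimal Python sketch (not from the editorial; the measurement values are made up) that summarizes repeated measurements as a mean plus or minus the standard error:

```python
# Illustrative sketch: summarizing repeats as mean +/- standard error of the
# mean, the numbers an error bar usually encodes. Values are hypothetical.
from math import sqrt
from statistics import mean, stdev

repeats = [9.8, 10.1, 10.4]  # hypothetical measurements from three repeats

avg = mean(repeats)
sem = stdev(repeats) / sqrt(len(repeats))  # standard error of the mean

print(f"{avg:.2f} +/- {sem:.2f} (n = {len(repeats)})")
```

Reporting the number of repeats (n) alongside the bar, as in the printed string, lets readers judge how much weight the uncertainty estimate can bear.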
Step 2. Tabulate your data in the supporting information
Include the data for any figures in the paper in the supplementary information. This is helpful for researchers who want to compare their data to prior studies.
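In practice, tabulating figure data can be as simple as exporting it to a plain-text file that travels with the manuscript. A minimal Python sketch (the file name, column names, and values are hypothetical):

```python
# Illustrative sketch: exporting the data behind a figure to a plain CSV file
# that can be attached as supplementary information. The file name, column
# headers, and values are all hypothetical.
import csv

figure_data = [
    ("temperature_K", "uptake_mmol_g"),  # header row with units
    (273, 1.2),
    (298, 0.9),
    (323, 0.6),
]

with open("figure1_data.csv", "w", newline="") as f:
    csv.writer(f).writerows(figure_data)
```

A plain CSV with units in the headers is easy for later researchers to reread without special software, which is the point of the exercise.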
Step 3. Show data from calibration/validation tests using standard materials
It’s important to report any calibration or validation tests using standard materials [which contain precise concentrations of a substance] as this enables researchers to make a connection to prior literature. It’s surprising how often a group of researchers have actually established a standard material, and this type of replication can have a large influence on our research community. This is something that people who are relatively early in their career can do to have a long-term impact.
Step 4. Share input files and version information
Think about how to make your models accessible to others. While many of us use proprietary software that can’t be shared, input files and version information can be included in the supplementary information. This makes it easy for other people to do the calculations, and it speeds up the process a lot.
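One lightweight way to capture version information is to record the software environment programmatically when a calculation runs. A minimal Python sketch (illustrative only; a real study would also record the versions of any modelling codes used):

```python
# Illustrative sketch: recording the software environment alongside a
# calculation's input files so others can rerun it. Only the Python and
# operating-system versions are logged here as examples; real workflows
# would also capture the versions of the modelling software itself.
import json
import platform
import sys

environment = {
    "python_version": sys.version.split()[0],
    "platform": platform.platform(),
}

print(json.dumps(environment, indent=2))
```

Dumping this dictionary to a file next to the input files gives reviewers and later users a record of exactly what produced the results.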
Step 5. Report observational details of material synthesis and treatment
Including some photographs of the experiment setup or footage of some of the repeats you did can make a big difference. When I have given talks about this, it’s amazing how often people tell me stories about trying to repeat an experiment and not realizing that a small detail, such as using a glass reactor, really mattered.
People don’t tend to report these kinds of details because they don’t know if they’re important. But having those visual records can help with some of those issues.
What problems are your five steps trying to address?
DS: There are various recommendations to improve reproducibility that don’t translate well to materials chemistry. Recommendations like increasing sample size and preregistering hypotheses make total sense in clinical trials, but that’s just not the way people do things in materials chemistry.
The challenge is that it’s easy to point the finger at others and say they’re not doing it right. This led me to consider the positive ways that we can encourage people to make their work better. I wouldn’t say there’s a crisis, but we all need to be more vigilant about this.
How do you tackle these issues in your own research?
DS: We do a lot of computational modelling and these issues certainly show up in our research. We might do some work and then a couple of years later someone else tries to do it, and it might not work in the same way anymore. Particularly with computation, these things should be easy to solve, at least in principle. You should have access to the same input files, the same code, and so on.
We have started being more thoughtful and systematic about making sure we have those files and that we make them available as part of our manuscripts. We’re not perfect, but we are trying to do better.
Recently, my collaborators and I submitted a manuscript on a new material created by a postdoc. The postdoc gave her methods and the chemicals she used to a grad student in one of the collaborator’s groups. The grad student went ahead and made the material, which gave us a huge amount of confidence that we had described things correctly and our synthesis was robust. I think if people are willing to do that, it really lends some weight to your manuscript.
What barriers do researchers face when trying to make their research more rigorous?
DS: The whole machinery of the scientific community values novelty over everything else. If, for instance, you submit a manuscript that’s repeating someone else’s work, you’ll likely have a reviewer tell you that it isn’t terribly interesting.
It’s also important to realize that repeating someone else’s experiments does take real resources. If you are spending time reproducing previous research, then you are not spending time on other things, so there is a trade-off.
What’s the most important thing researchers should remember when considering reproducibility?
DS: We should always remember to do what we wish others would do. I think there’s a burden on university faculty and others who train researchers to make this part of our conversation and to make it part of the training we do with students.
A positive change would be for people who are already doing replicates to look for ways to include that information in their publications.
The interview has been edited for clarity and length.