Non-scientific ways to approach the “reproducibility crisis”

Reproducibility is in crisis; or at least that is how 90% of more than 1,500 researchers responded to a Nature survey conducted a few months ago (1). The statistics from the survey are shocking: 70% of researchers have tried and failed to reproduce another scientist’s experiments. The Nature editorial and the special articles on reproducibility that followed have revived an old concern - can we trust scientific publications? The same concern was discussed at a workshop two years ago, held jointly by the NIH, Nature Publishing Group, and Science (2). Although the workshop prompted new guidelines for grants and publications, the results of the Nature survey reveal that there is still “room for improvement.”

Last year, Science published an article about reproducibility in the psychological sciences (3). The “Reproducibility Project” recruited hundreds of scientists to perform 100 replication studies, and the results were shocking - fewer than half of the original findings were successfully replicated. This raises further concerns: if two studies reach different conclusions, how can you tell which one is right? As an exacerbating factor, imagine that one of the studies was performed by a large, well-recognized lab and the other by a smaller group; further, consider the likely outcome if the smaller lab tries to publish after the better-known group: it will probably face an uphill battle or end up publishing in a more obscure journal.

Reproducibility is essential to building scientific knowledge. My former mentor, Dr. Mayani (Oncology Research Unit, IMSS, Mexico City), always uses the phrase “on the shoulders of titans” (referring to the Gemini program, the NASA project that preceded Apollo and launched on Titan rockets) as an analogy for how every new discovery is supported by an older one (Iorns and Chong used the same analogy in reference 4). So every paper that cannot be reproduced affects the papers built upon it, causing a so-called “domino effect.”

The scientific community is addressing the reproducibility crisis in many ways. The Reproducibility Initiative (5), launched in 2012 with PLOS ONE as a partner, aims to provide a platform for publishing validation studies. In 2013, Nature announced new initiatives (6), most of which focused on eliminating length restrictions on methods sections and encouraging authors to publish supporting data and share detailed methods. The NIH has revised grant applications and published a webpage focused on reproducibility (7), encouraging rigorous statistical analysis, data sharing, transparency in reporting, and shared best-practice guidelines. However, these new policies are not enough to encourage authors to publish validation studies if such studies are not supported by grants. Fortunately, new grants are emerging that focus specifically on validating previous reports. For example, this year the Netherlands Organisation for Scientific Research (NWO) launched a fund dedicated to replication studies, with over $3.3 million USD over 3 years, but similar mechanisms are needed elsewhere if we are serious about addressing the issue.

Will the initiatives from journals and funding agencies be enough to overcome the reproducibility crisis? In my opinion, they will reduce the problem, but they will not eliminate it. I think there is a risk that most of the initiatives will target only novel discoveries or new technology. If we consider the possibility that most of what is published cannot be replicated in full, it will likely be impossible to secure sufficient funds to repeat even a small fraction of the current body of research. This is especially challenging given the related “grant crisis”: worldwide budget cuts to research are limiting the number of studies funded.

I think the reproducibility crisis requires us all to rethink how we perform and publish science. As a brainstorming exercise, which will hopefully inspire others to think about this topic beyond reading this post, I have looked outside the field for ways to improve our scientific practice and share what I found below.

From cocktails to the bench

Before attending our annual meeting in Kyoto last year, I had the opportunity to visit Tokyo. I was amazed by the city, but mostly by its people. One person who stood out was Hidetsugu Ueno, a mixologist and owner of the High Five bar. He is well known for having no menu at the bar; he customizes each drink to each client. But this is not the reason I was inspired by Ueno. He is also famous for his hand-carved “ice diamond” technique. I had read about him, but I have to say that watching him prepare a drink was outstanding. Every step was performed with perfection - the way he poured the drink, shook it, and placed the ice into the glass. Every detail was considered, from the quality of his products to the level of attention he put into every step, repeated carefully over and over. You could see how proud he is of what he does and how he is driven by his passion. He made me think of a “simple” drink as a piece of art. And he was not the only one “mixing” at the bar; the same experience could be had with any of his staff, likely the result of his good training. Ueno’s lesson about reproducibility: pay attention to the quality of every aspect of your activity, from the “reagents” to the execution of the “methods” and, most importantly, the training of all your personnel, so that you get the same reliable, high-quality results from any member of your team.

Medals come from good execution of the fundamentals as much as from novelty

I was unexpectedly inspired by gymnastics while watching the recent Olympic Games with my sister, a former gymnast. She explained to me that while every routine includes “free” exercises, which are new and chosen by the gymnast, each also has obligatory exercises, a set of movements that must be integrated to ensure a good score. So I thought about having “obligatory experiments” for each paper: a solid foundation consisting of the repetition of former experiments (to validate them and encourage reproducibility) together with the novel assays (the “free exercises”). Given our focus on “new” ideas, I think it might be better to establish a precedent of including a proportion of replication experiments in each paper, instead of publishing 100% validation papers. Perhaps we can offer to include (and tolerate as reviewers) a fair percentage (say 10-20%) of repetition of key experiments and/or validation of reagents that support our current observations as a “requirement” for any publication. This would extend and better illustrate the general principles of the approach beyond what is currently included in most “methods” sections, providing enough information that the reader does not have to look up all the prior data and/or methods to understand (or believe) the experiments shown. Obviously, this would require commitment not only from researchers and manuscript reviewers, but also from editors, funding sources, and even, when applicable, animal oversight committees. Whether it is a total validation study or a 10-20% validation paper, I believe it is essential that any validation attempt and its results are published as part of the full report of each new scientific publication. Relatedly, like most of my colleagues, I think we need to discourage the use of blogs or social media to publish validation results, which can discredit our colleagues and lacks scientific rigor, including peer review and a clear mechanism for response from the original authors.

The Devil’s Advocate and science’s “worst sin”

I think I can find an Al Pacino quotation for any example in life or science, but for this blog I like his line from The Devil’s Advocate: “Vanity, definitely my favorite sin.” Scientists are under a lot of pressure not just to publish, but to publish novel findings. There are time limits on grants, graduate theses, and postdoctoral stays, which can contribute to the growing phenomenon of “rushing science.” Relatedly, publishing what is perceived as a truly novel story often leads to recognition by one’s peers, including further speaking opportunities, additional funding, and even extra institutional support. As scientists, and especially as new investigators, we need to build our careers with firm steps, with studies that can not only be validated, but that serve as solid foundational “stones” of knowledge. We must resist the seduction of publishing a preliminary but novel finding, and the fear of someone else publishing it first. We should avoid sending out a manuscript before it is truly ready, with consistent results derived from a variety of assays and with all the appropriate controls and alternative explanations considered. A solid story will presumably have a longer life than a quick flash later proven to be not quite right.

The reproducibility crisis is dangerous. It can send the wrong message to the public, with both bad and good science losing credibility. But most importantly, this crisis is dangerous to scientists. We count on each other to build our knowledge further. In science we are never alone; even if we have never met the author of a paper, we owe him or her for the previous contribution and for being the “giant” on whose shoulders we based our hypothesis. Scientists have a very important job that we have to perform with excellence and passion; science has no space for mediocrity. Findings should be judged by the rigor and quality of the experiments behind them, not only on novelty. Scientists are always pushed to find novel and useful knowledge, which may or may not equate to hard work and excellence. To me, the current model in science is analogous to paying a traffic policeman by the number (or value) of tickets written, rather than for performing the job with ethics and quality. This crisis will end when employers, funding bodies, and publishing houses recognize scientists for the way they conduct science, including the long-term impact and use of their work by others as a foundation for new discovery, not for the short-term surprise of their findings.

Acknowledgments

I want to thank Trista E. North (BIDMC, Harvard Medical School) of the ISEH Publications Committee for all her help editing this blog post. I would also like to thank my former mentor, Dr. Hector Mayani, for all the lessons I have learned from him.

Citations:
  1. Baker M. Is there a reproducibility crisis? Nature. 2016;533(7604):452-454.
  3. Open Science Collaboration. Estimating the reproducibility of psychological science. Science. 2015;349(6251):aac4716.
  4. Iorns E, Chong C. New forms of checks and balances are needed to improve research integrity. F1000Res. 2014;3:119. doi: 10.12688/f1000research.3714.1.
  6. Announcement: Reducing our irreproducibility. Nature. 2013;496(7446):398. doi: 10.1038/496398a.
 
 
Eugenia (Kena) Flores-Figueroa, PhD
ISEH Publications Committee Member
 
Oncological Research Unit at the
Mexican Institute of Social Security (IMSS)
Mexico City, Mexico
