What To Do When Science Gets It Wrong

On good faith science and the importance of understanding the context in which scientific work is accomplished.

Read Time: 4 minutes


In February 1953, one of the world’s pre-eminent scientists, Linus Pauling (who went on to win two Nobel Prizes), published, with Robert Corey, a paper in the Proceedings of the National Academy of Sciences called “A proposed structure for nucleic acids,” suggesting a triple helix as the foundation for what we now call DNA. He was, of course, wrong.

In April of that year, James Watson and Francis Crick published their paper in Nature, “Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid,” which correctly proposed the double helix, ushered in the era of genetics that remains with us to this day, and won Watson and Crick their Nobel Prize. The Pauling-Corey paper remains available through paper repositories and has been cited hundreds of times. We know this paper is wrong, but it remains in the literature. Should it have been retracted?

This question has been posed before, not least by Retraction Watch, one of the oldest and most prominent watchdogs of science, which has led the charge to correct science that is wrong or fraudulent. The question may matter now more than ever, as the past decade has seen an increase in the number of papers retracted and an increase in watchdog groups aiming to correct the scientific record.

The majority of retractions are due to fraud or scientific misconduct, which seems to us easy to adjudicate. Data and subsequent papers that are based on intentional attempts to deceive have no place in the scientific literature. But what about science conducted in good faith that arrives at the wrong answer, as the Pauling-Corey model did by failing to account for all of the observed data and, in particular, for data collected by others? What do we do when we realize that our published idea about a triple helix is wrong?

We suggest that science getting it wrong is an important part of science. In some fields, getting it wrong is more prevalent than in others. For example, a now highly cited paper from 2005, “Why most published research findings are false,” led to extensive discussion and further research to better understand the “replication crisis” in behavioral science. This is all as it should be. We learn from what we are doing, how we are doing it, and what others say about our work, so that we do better over time. And getting it wrong, coming to the wrong conclusion using the data at hand (due, for example, to misinterpretation or incomplete theorizing), does not necessarily mean that the work done has limited value. After all, our insights are always a product of the moment when a particular piece of science was done, and we draw conclusions based on what we know, or what we think we know, at that moment.

It seems to us that missing from the discussion about science getting things wrong is just this—a recognition that science does not happen in a proverbial vacuum. We consider the data through particular analytical and cognitive lenses. And we bring biases to the questions we ask, in the approaches we take, and in the conclusions we draw.

The conclusions of science, therefore, cannot be considered without understanding the context of the work. Insofar as any paper gets it wrong (assuming it was carried out honestly, with good intent, with no fraud or malfeasance), it is still a record of the scientific process at a point in time: of data collected, analysis done, and conclusions drawn. If we later realize that any of this was incorrect, that realization points to how we should improve the relevant stages of scientific production so that we can do better science.