Null (And Void?)
On null research findings and how, even though they may not be as attention-grabbing as their positive counterparts, they are still useful.

Read Time: 5 minutes
Scientists like good news: positive and significant findings impress the public and funders of the work. As a corollary, science tends to downplay studies with null or negative results — those that fail to confirm a preconceived hypothesis (usually in the form of being unable to find an expected relationship between variables or groups) or that demonstrate a new medication or behavioral intervention does not improve health. Negative or null findings are far less likely to be published than positive results. Unfortunately, this publication bias can mislead other researchers or the public.
Put this into the context of how scientists think of their work: how might one interpret a null finding, the failure to find a predicted difference between two treatments or proposed causes? There are four possibilities. First, the researchers did not find what they and many others thought they had good reason to find. Second, the researchers did not find what only they thought they had good reason to find. Third, the researchers did not find what they only had a hunch they would find. Or, fourth, the researchers did not find what they had no good reason to expect but hoped they would anyway.
We make these distinctions because they can teach us a lot about science, how science is practiced, and how it can be better.
Possibility 1 is fundamental to science. As a physicist working on a particle accelerator recently said, “Physicists get really excited when theory and experiment do not agree with each other. That’s when we really can learn something new.”
Possibility 2 says something about social epistemology: how we come to agree. A good question might be whether the investigator has changed her mind and now agrees with her peers that there was no good reason to expect what she was looking for. How convincing the null result is in this case depends on how well the study was conducted; negative results do not imply that a study was flawed in its design or analysis.
Possibility 3 is about exploratory research: scientists pursuing intuition. When a completed study shows the hunch was wrong, scientists must convince themselves that this line of work really should not be pursued further. Pharma companies, for instance, work rigorously to confirm their null results; they don’t want to miss a hard-to-find gem of an outcome (such as that a new medication works!). But from Pharma’s point of view, the only thing worse than a null study is an unnecessary (and costly) replication of a null study.
Possibility 4, hoping for positive findings, says a lot more about the practice of science and how it can be improved. Because scientists are preternaturally optimistic about their work, they sometimes wrongly interpret results positively when they are, in fact, negative (statistical tricks, the inclusion of only part of the data, or changing hypotheses post hoc can also produce such false-positive interpretations). In the case of medication efficacy studies, this creates an inflated expectation of a drug’s impact. However, the scientific cultural attitude is that only positive studies are “successful.” Thus, negative or null studies too often end up in the “file drawer,” and a line of research that likely should be abandoned continues unabated.
Although scientists are generally willing to publish the null results they produce, only a small number manage to do so. Some evidence suggests the problem is getting worse, with fewer negative results seeing the light of day over time. This publication bias is sometimes called the “file drawer problem,” and the non-publication of methodologically sound studies wastes time and money: other researchers may unknowingly repeat studies whose hypotheses have already been reliably disproven.
Investigators conducting clinical trials in the United States are mandated by law to report their results. But where to publish? There is a move to reduce selective reporting, to publish more negative findings. These changes have started to materialize as publishers adopt new manuscript formats and launch journals dedicated to null results. Many journals now encourage teams to submit plans and protocols for experiments before conducting them (i.e., pre-registration) so the journals can review the proposals and commit to publishing the results, whatever the outcome.
Of note, the machine-learning tools increasingly used for predictive modeling in many fields can learn only from published work. Scientists have found that the absence of negative data in the literature hampers artificial intelligence (AI) development and, again, could lead research in misleading directions. AI models will be limited if their training data do not reflect complete knowledge.
Publications containing negative results can be as important as, and of equal quality to, their positive result counterparts. They may not be as attention-grabbing, but if data produced in such reports are rigorously collected, they are always useful.
Previously in Observing Science: Women in Science