Trust and Verify
On good judgement and building trust within both the scientific enterprise and the public.
International weapons treaties work from a motto based on an old Russian proverb: trust but verify. Science works from a related point of view: trust and verify. The “and” is important. As scientists, we tend to concentrate on the verification aspect: we like to double- and triple-check our own findings before making them public. But as a scientific community, we move forward based on trust. We accept that knowledge is communal and cumulative, and we depend on the work of our predecessors and contemporaries.
To a great degree we are obliged to trust others. When we perform peer review—the sine qua non of the scientific enterprise—we trust that the data presented is legitimate and that the study proceeded in the way that the scientists who performed it said it did. Scientific methods are a central part of any published paper—the protocol, the population, the analysis—allowing replication by others. Yet we must admit that when we peer review studies, we concentrate on study design and the authors’ data interpretations and have no easy way to judge the authenticity of findings. Groundbreaking work always requires replication, verification.
Verification also comes through competition among scientists, who provide a check on flawed ideas by double-checking one another's work; replication is not only a way to disprove others' theories but sometimes a way to make a name for oneself. Famous scientists are perhaps believed more easily, but may also be challenged more vigorously. There is also redundancy built into the scientific enterprise, because scientists often come up with similar ideas and run similar experiments in the course of moving a field of research forward. The arbiter of validity is verification rather than an elite panel making pronouncements on truth. In general, the best ideas survive.
But mostly, scientists do not replicate the experiments or clinical trials they read about in journals, where peer review has provided its validation; replication requires laboratory infrastructure and is expensive in time and resources. We rely on the work of our colleagues, and trust work that we haven't tested ourselves, unless we find what we read incredible. Some findings call out to be verified even beyond peer review, but in nearly all cases we count on the data we find before us, and we advance by trying to extend their implications to another population, another condition. It is easy to miss how much faith this places in the good judgement of the scientific community and of researchers to present their work reliably.
When this faith is undermined, when scientists mishandle data, fudge analyses, or commit willful scientific misconduct, there is appropriate gnashing of teeth. And not only by journal editors and the professional watchdogs whose jobs are to protect the scientific record. We should feel personally injured when a colleague acts questionably, misbehaves. Over the past decade, the number of research misconduct allegations reported to the National Institutes of Health has more than doubled. The question of why this uptick is occurring we will save for another day, but it must rest in part on science's systems of reward and punishment.
The trust researchers can place in the scientific literature is the basis of how much trust the public places in science. What scientists know depends on the clear, thorough, and good-faith communication of others. Science depends on the wisdom of the global community of scientists and an agreement as to which discoveries are trustworthy and verifiable, and which are not. The entire enterprise, in some ways, requires a degree of trust in our collective character. Ensuring this earns the public trust.
Previously in Observing Science: Scientific Siloes