Censoring Ourselves

On the risk of self-censorship and its threat to the foundation of science.

"Observing Science" title and mission on dark grey background

Read Time: 5 minutes


One of the central tenets of science is that its work be allowed to follow the data, to publish the facts wherever they lead us, to the end of advancing knowledge and understanding. Scientists bridle, correctly, at any efforts aimed at stifling or censoring science. For example, recent efforts to censor science around climate change have been met with widespread opprobrium in the scientific community. Similarly, a few years ago, there was substantial pushback against efforts by federal funding agencies to align with a more conservative agenda that was being promoted by the then-president. In the main, countries such as the U.S. with a robust tradition of research have been able to maintain the progress of science, pushing back against occasional politically motivated efforts to impose ideological agendas that censor the work of science.

But what happens when science starts censoring itself? It is no secret that science, the bulk of which happens in universities, is predominantly carried out by scientists who themselves have a particular ideological bias. Fewer than 10% of U.S. academics, in one study, identified as being on the “right” of the political spectrum. In and of itself, there is nothing wrong with scientists having perspectives on the world around them, nor with those perspectives clustering, as perspectives do in all workplaces. But in the context of science, where the purpose is to dispassionately evaluate data, does such homogeneity of perspective affect the work of science?

A recent study published in the Proceedings of the National Academy of Sciences argued that there is indeed self-censoring happening in science, driven largely by “growing censoriousness,” with scientists worried about publishing work that deviates from accepted norms and beliefs within the scientific community. The large group of authors behind this work do note some disagreement about the extent of the problem, but point out the need for more data on scientific censorship and for greater and more honest engagement with the issue.


What are the forms that self-censoring might take? Scientists could mask disagreements with colleagues, tuck away pieces of evidence or data, or hesitate to publish or voice what could be perceived as challenges to senior scientists or prevailing theories, for fear of damaging their reputations. In addition, some scientists might be reluctant to publish or discuss research that they believe could confuse the public, complicate a simple argument, or undermine confidence in public policy recommendations.

In some respects, it is not at all surprising that scientists would “self-censor” within the parameters of what is seen as acceptable in the public conversation. The past decade in the U.S. has been characterized by so-called “culture wars,” with ideological arguments shaping presidential contests and having a substantial impact on public policymaking. In a sense, market demands promote self-censoring in the U.S.; in other countries, authoritarianism and fear of direct retribution silence dissent and block openness. The challenge, however, is that such self-censorship poses real problems for science.

First, self-censorship drives science toward homogeneity in the questions we ask and the answers we dare to offer. Such homogeneity is directly at odds with other efforts that aim to ensure greater diversity of identity and perspective among those who are involved in science. These efforts, which have accelerated over the past decade, have aimed to ensure a greater breadth of thinking and to avoid the biases that emerge from too-narrow perspectives. A science that practices its craft within narrow confines of what is considered acceptable undermines this very effort at diversifying who does science, and how that science is done.

Second, preconceived notions of which science is “acceptable” rapidly run the risk of pushing us to adopt particular ideological beliefs that, by their very existence, threaten the credibility—and influence—of science. This is particularly problematic in light of the growing lack of confidence in science, as amply demonstrated by a number of recent national polls. It will be very difficult to advance evidence-based answers if science is viewed as tilting the evidence playing field by censoring the questions it asks or the answers it offers.

And, third, self-censorship threatens the very foundations of what science aims to do, that is, to observe the natural world dispassionately, to draw inferences from experiments, and to understand how the world works. Such observation is simply incompatible with a partial lens that views only half the world, ignoring the parts that do not align with our beliefs.

These challenges understood, the harder question becomes: how do we solve self-censorship in science? There is clearly no one answer to this, but a first step may well be acknowledging the risk of self-censorship, raising awareness of the problem, and challenging scientists, leaders in science, and science administrators to push us all out of the ideological narrows where we may inadvertently find ourselves. An active conversation in science about these risks seems to be in order.

Previously in Observing Science: Science as Art?