Bias

On the unavoidability of bias in science and the importance of identifying and admitting to our own biases.

"Observing Science" title and mission on dark grey background

Read Time: 4 minutes

Published:

There is a bias against having biases. Our everyday use of the word bias is derogatory, suggesting a belief that is unfounded and unfair; a bias is a prejudice. In biomedical science, by the 1970s, bias had largely become a statistical, clinical-trial term with a negative connotation, associated with misclassification, error, flaws in experimental design or data collection, and a lack of validity. The moral weight of the term “bias” is still apparent in its use as a disparagement, something to be avoided in our experiments. Pushing against this dominant conception, we suggest that bias, seen as an unavoidable part of science, may be getting a bad rap. A more neutral understanding of what bias is could offer some insight into the process of science and a better approach to thinking about scientific work.

Here is a bias: if an intervention for condition X is feasible in a low-resource setting, it is likely to be feasible in most settings. Here is another: if an intervention works well for people who are highly resistant to most other treatments, then it is likely to work for most people who have never been treated before. And another: if the “average” patient in a clinical trial does well, then the intervention is “effective”—even for the neediest patients. And another: it is worse to assert things that are not true (Type 1 error) than to miss or ignore important things about the world (Type 2 error), even when what is missed carries great potential harm, as with the link between cigarette smoking and cancer.

Scientists are never bias-free before an experiment begins. They begin their work with some fore-judgment, some assumptions based on the current state of knowledge. Bias, then, is a word that simply points out that we are guided by prior categories of understanding; an experiment is done to affirm or negate these existing beliefs. Only by identifying and admitting to our biases can we begin to grapple with them. This is similar to acknowledging that our values, which are also necessary to thought and to decision-making, are not something to be embarrassed about.

Known biases can be just fine—we do not try to arrive at the airport just in time for our flight; we go on the early side so as not to miss the plane. It is the unknown or unexamined biases that are problematic. For example, a mental health researcher might choose to examine only symptom reduction while not considering other outcomes such as social connection or return to work.

Recognizing biases is particularly important when considering the question of who bears risk. Patients bear the side effects of a new treatment—is the benefit worth the risk? Communities bear the risk when a new screening program goes wrong—too many well people are identified and grow worried unnecessarily. Bias is appropriate with respect to risk; we have a phrase for it: “to err on the side of caution.” Should all new technologies be innocent until proven guilty? A little prejudice might serve us well concerning things like new chemicals and applications of AI that science brings into the world.

Bias does not mean that we are unwilling to accept negative results at the end of an experiment, nor that partiality should ever overcome scholarship. Learning is sometimes much less free from preconceptions than we realize, and science needs to come clean about this.

Previously in Observing Science: A Science of Values