Can Science Be Scaled?

On the challenge of scaling up science, even when the intentions are good.

"Observing Science" title and mission on dark grey background

Read Time: 4 minutes

Published:

Randomized trials originated in agricultural research in the 1920s, where the field conditions of experiments with seeds were nearly perfectly controllable. Trials soon moved to humans, and studies using medication to lower high blood pressure and treat other conditions have been successfully completed and interpreted thousands of times; randomization of individual patients to individual pills is also reasonably controllable. But some scientific experiments are less manageable, more subject to forces that are largely unpredictable. For example, researchers recently set out to study ways to reduce opioid overdose deaths in towns across America using a randomized trial design—the scientific method that has historically brought us our most reliable evidence—applied here to a knottier problem. These researchers entered the messiness of the complex system non-scientists call “the real world.” Such a study raises the question: is it possible to impose city- or community-wide interventions that change health behaviors and outcomes?

The researchers chose 67 towns deeply affected by opioids across four states. Half of these towns were randomized to receive financial resources and technical assistance to implement and augment life-saving practices that had been used successfully in prior randomized trials to save individuals: use of the overdose reversal medication naloxone and greater deployment of addiction treatment medications. The challenge the scientists took up was this: when we know something works for an individual, can we scale it up to change a town? The research team offered a broad array of practices they knew had worked in smaller and simpler prior studies. But now, with staff, marketing, and clinical expertise, would a bundle of such approaches work across dozens of towns?

A town is a system, and what we know from systems science is that any intervention or health policy rolled out to change the outcomes of a town will inevitably be embedded in intricate and longstanding networks of technical, economic, social, political, and other relationships. Research decisions to reduce overdose would inevitably affect dozens of stakeholders, the public at large, and persons who use drugs, who often live atomized and hidden lives across large geographic areas. An effective set of interventions for overdoses would require changes in the beliefs and behaviors of a large population that was dynamic, evolving, and interconnected. Any change would need to be driven by complementary changes in education, incentives, and institutions.

When the work of science meets the messiness of the world, resistance naturally arises. Improvement initiatives fail to get off the ground; there are layoffs at community agencies, and the morale of research staff wanes; new services rushed into the field before all the kinks are worked out turn off users, and unfavorable word of mouth ensues. Scientific decisions create a range of feedback loops, most of them slow, delaying the accumulation of evidence of what might be working in any given town. All of these concerns have arisen during the overdose trial. Scaling up is never easy; implementation of any intervention at scale is often imperfect, because outcomes are also determined by hard-to-anticipate community resources, local skills and knowledge, community norms, and other forms of human, social, and political capital.

The researchers and the readers of this research may never know the full consequences of how the study’s three-year actions played out in these diverse communities. Follow-up studies must be carried out in the years to come, even knowing that changing conditions may render the results irrelevant.

The researchers began the trial with eyes wide open: with the understanding that reductionist, oversimplified programs are never sufficient to answer our most complex societal questions. Yet in this case, the randomized trial design that we have grown to depend on to instruct us about causality, to provide unimpeachable evidence and “best” answers to our real-world questions, may not produce much clarity. Our best learning requires an unswerving commitment to the rigorous application of the scientific method. Should the researchers and their funders have known in advance that such an ambitious community-based study would have trouble implementing a randomized trial design? Were its limitations so clear that we should not have proceeded? Should we have devoted these study resources elsewhere?

Science remains rightly desperate for policies and practices that reduce overdose; the leaders of this study had good intentions. Science, in this case seeking behavior change at the scale of a city, is hard to do.

Previously in Observing Science: Evaluating Scientists