Science, Fast and Slow

From CRISPR to AI, new technologies progress quickly. The regulation of science needs to be taken seriously from the very start.




Artificial Intelligence (AI) and CRISPR, two technologies developed over the last several decades, could change how we live and work for decades to come. Both entered wider public awareness only in the past year with readily usable products, namely ChatGPT and an FDA-approved treatment for sickle cell anemia developed with gene editing. These innovations have been met with a mix of enthusiasm and caution. No one disputes that using CRISPR to help treat sickle cell anemia is wonderful. But could CRISPR be used to manipulate genetic code in harmful ways? Likewise, AI could have enormously positive impacts on health care. But could it also have catastrophic effects?

AI’s penetration into our daily lives is moving fast. Here, just a few months into 2024, AI’s ability to absorb and process enormous amounts of digital data enables these learning programs to answer written questions in full, deeply sourced (though sometimes not fully accurate) paragraphs, upending businesses built on organized writing and teaching, and to make predictions, from weather forecasts to cancer diagnoses. AI’s ability to learn computer languages has accelerated the work of software designers, whose programs are now written and debugged faster than ever. More disruption will quickly arrive as AI improves image and speech recognition and gets embedded in all our daily digital touchpoints: email, search engines, and beyond.

AI will soon be our front door to the internet. We will upload our own data, and AI will summarize it, rework it. Each of us will have a personal assistant. Yet, oddly, no one actually knows exactly how AI works—for example, why the same question asked twice produces slightly different AI answers. This lack of mechanistic insight into something we’ve developed is unusual in the annals of science, and worrisome, given AI’s ineluctable capacity to learn and reason and decide.

CRISPR, a biological tool, can also be “programmed,” in this case, to cut DNA in particular places, removing defective genes or fixing or activating other genes. It holds the promise of curing not only sickle cell anemia but many other genetic diseases, including cystic fibrosis and muscular dystrophy, and perhaps reshaping our immune systems. CRISPR’s mechanism of action is understood, but its full consequences are not. Manipulating one gene may cause unintended damage in another; when immune cells are edited, cancer defenses may be lost. And there are ethical concerns, from creating novel incurable diseases to enabling new forms of biological warfare that alter not only bacteria but also insects and animals.

CRISPR, as the older of these two technologies, has shown how we can act to harness a new technology, preserving its potential and mitigating its harms, although regulatory and ethical safeguards need regular re-examination here and in other countries. The holders of key CRISPR patents have imposed moratoria on certain lines of research, trying to be thoughtful about how to proceed safely. There have been calls for similar guardrails on AI, notably from some of its originators. Indeed, there are already Congressional hearings on AI’s reach. The coming years will undoubtedly hold much more conversation about how to create the right guardrails on AI.

New technologies progress quickly, driven by intellectual competition between scientists and the draw of money to be made. The question of which comes first to mind, money or safety, looms. What history teaches is that the regulation of science needs to be taken seriously from the start of any new technology. There are always mistakes and surprises. There are scientists’ promises, but there must also be societal permissions. Accountability needs to be transparent.

Do we need an FDA for AI products, as we have for medications and for the CRISPR products we will use more and more? Can regulations be forcefully enforced? As scientists, we first ask: can we go faster? As citizens, we need to ask: can we go slow enough to take care of ourselves?
