Bryn Williams-Jones argues that focusing on researchers’ intentions misses how risks emerge and responsibility is distributed across modern research systems.
__________________________________________
Bioethical debates about artificial intelligence — particularly in relation to health data or biological research — often focus on a familiar question: what were researchers trying to do? This framing centres ethical attention on intent. But intent captures only a narrow part of where responsibility lies, and it overlooks how risks and harms arise as research circulates and is repurposed.
Modern research produces extraordinary benefits — new therapies, tools, and forms of knowledge. Yet it also expands the capacity for harm in ways that are diffuse or unintended. During the COVID-19 pandemic, for example, digital contact tracing tools were developed to support public health, but they also raised concerns about long-term surveillance and secondary uses of personal data. Behavioural research can support public health campaigns but also be used to manipulate public opinion. Technologies designed to improve workplace efficiency can be repurposed to exert control over employee behaviour.
This is what the concept of “dual use” is meant to capture: the fact that the same research, data, or technology may be used for different purposes and can generate both benefits and harms. The temptation is to frame this as a problem of “good” versus “bad” (mis)uses, or more classically, civilian (good) versus military (bad) applications. But that framing is far too simple and not especially helpful.
In contemporary research systems, knowledge and technologies rarely remain confined to a single domain. They move across contexts, sectors, and borders. They are recombined, adapted, and repurposed in ways that are difficult to predict.
The value of dual use as a concept is that it shifts ethical attention away from intent and toward responsibility and the management of risk. The question is no longer only what outcomes are possible, but how research is conducted, how risks are anticipated, how knowledge is shared, and who is positioned to act (and whether they have the means to do so) when concerns arise.
The challenge is thus one of distributed responsibility: not because responsibility is diminished, but because no single actor has oversight over how knowledge or technologies can or will be used once they move into different contexts.
Responsibility is shared across the research process, but it differs in both its nature and its scope. Researchers shape how knowledge is produced and what risks are foreseeable. Institutions and funders influence how research is organized, supported, and incentivized. Governments regulate specific domains and uses, often only after technologies begin to circulate.
What matters is not only how knowledge is valued in different contexts, but how it is integrated and deployed. A technology developed for civilian purposes may later be adapted for security applications; data collected for health research may be reused in ways that stigmatize certain groups. At each stage, different actors are positioned to act, but none can claim full control.
This is where dual use becomes ethically demanding. It forces us to ask a different kind of question: who is in a position to notice risk across the research and innovation process, and what are they able or willing to do once that risk is identified?
That question arises long before knowledge is disseminated or a technology is deployed. It appears in decisions about research design, data sharing, collaboration, and publication. It also arises after the fact, when unintended harms emerge or responsibility is contested.
An important strength of the concept of dual use lies in its connection to the precautionary principle: when potential harms are serious, action may be warranted even in conditions of uncertainty. But precaution is not a simple solution. Too little, and institutions become complicit in avoidable harm. Too much, and research is constrained by fear or excessive control.
Dual use helps clarify what is at stake. It does not tell us what to prohibit or permit; rather, it pushes us to think in terms of proportional responses, where greater risks justify stronger forms of oversight, coordination, or constraint.
This concept is frequently misused. Sometimes dual use is invoked as a moral accusation, suggesting that researchers are reckless or ethically suspect. This tends to shut down dialogue and can create a culture of silence rather than encouraging critical reflection. At other times, it becomes a policy shortcut, used to justify broad restrictions without careful analysis. In practice, this protects institutions more than researchers or the public.
Used well, dual use makes visible how risks arise in research systems and how responsibility is shared by multiple actors.
For bioethics, this shared responsibility has practical implications. It means moving beyond a narrow focus on intent at the moment of ethical evaluation, and taking seriously how research is organized, governed, and incentivized. It requires paying attention to where responsibility sits at different stages of the research and innovation process — for example, when AI systems developed in research settings are later deployed in operational contexts where oversight is fragmented — and where it falls through the cracks.
Dual use does not resolve the tensions between openness and security, or between innovation and precaution. But it clarifies where bioethics has more work to do: engaging with the everyday conditions of research and innovation, where risks are diffuse, responsibility is unevenly distributed, and ethical questions rarely present themselves in a single decision.
__________________________________________
Bryn Williams-Jones is a professor of bioethics in the Department of Social and Preventive Medicine, School of Public Health, Université de Montréal.