Artificial Intelligence and Health Care: The Importance of Location

Ryan Tonkens illustrates why a health care system’s location matters for managing the ethical issues arising from the use of artificial intelligence in health care.

__________________________________________

Artificial Intelligence (AI) is making waves in health care delivery systems internationally.

AI applications are trained using techniques of deep machine learning. Enormous amounts of data (“Big Data”) are fed into the system, which processes and analyzes them for patterns. Those patterns can be used to develop further algorithms that refine the system’s own learning, to generate predictions about particular outcomes (such as how busy an emergency room will be on a particular day), to make recommendations to medical professionals (for example, about how to interpret a diagnostic image, such as a mammogram), and even to identify the most appropriate diagnostic procedure for a particular patient waiting in the emergency room and then order that procedure.
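
To make that pipeline concrete, here is a minimal, purely illustrative sketch of the prediction step described above: a model trained on historical records to estimate how busy an emergency room might be on a given day. The data, features, and model choice are all invented for illustration (using Python and scikit-learn); this is not any actual deployed system.

```python
# Illustrative sketch only: predicting daily emergency-room volume
# from historical records. All data here are synthetic assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features: day of week (0-6), month (1-12), local flu-activity index.
n = 1000
X = np.column_stack([
    rng.integers(0, 7, n),   # day of week
    rng.integers(1, 13, n),  # month
    rng.uniform(0, 10, n),   # flu-activity index
])
# Synthetic target: baseline visits plus weekend and flu effects, plus noise.
y = 80 + 15 * (X[:, 0] >= 5) + 4 * X[:, 2] + rng.normal(0, 5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Expected visits for a hypothetical Saturday in February with high flu activity.
print(model.predict([[5, 2, 8.0]]))
```

In a real deployment, of course, the inputs would be drawn from actual health records, which is precisely where the privacy and consent concerns discussed below arise.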

There are many ethical issues that accompany the use of AI in health care delivery, for example:

(i) Privacy of personal health information: Data used to train AI systems (typically) come from real people, through information contained in electronic health records, for instance. Those data could potentially be used in ways that are inconsistent with what people have consented to, or that do not respect their privacy, for example if de-identified information is re-identified, or if it is sold to private companies for commercial purposes.

(ii) Responsibility for harm: Because AI systems are autonomous to varying degrees; because there is often an opaque “black box” at the centre of the system’s decision making; and because many people are involved in developing and implementing a particular system, determining whom to hold accountable when things go wrong is challenging, and there is currently no clear consensus on a resolution.

Recent WHO guidelines (2021 and 2024) provide more detail on these and other ethical issues surrounding AI and health care. However, such documents are limited in the extent to which they can provide precise, detailed answers to the ethical questions for a particular place.

There is no “one size fits all” answer to the ethics of AI in health care. The specific technology and its intended application make a moral difference, and, importantly, the details of a particular community, including the experiences, values, and needs of the people living within it, significantly shape the answers to the ethical questions.

Foremost, the ethical questions need to be grounded in a particular geographical location. For example: “Do people in Northwestern Ontario want to use AI in health care?” “Do they have access to the required infrastructure (such as telecommunications networks)?” “Are AI applications addressing the needs of people living in Northwestern Ontario?”

AI in health care needs to be place-specific in another way as well, namely in the sources of data used to train the AI. If AI is going to be used in health care in community x, then people living in that area need to be directly involved in steering its development, including in training the AI (i.e., providing data representative of the people living in that area). If the data are not “locally-sourced”, then the potential benefits of AI are largely illusory, since the system will make predictions and recommendations based on generalized information from other populations, forming (at best) an arbitrary basis for predictions and recommendations for people living in that community. Moreover, there is widespread acknowledgment that AI systems can be biased, and using place-specific information to train the system is one way to try to avoid such biases.
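
To illustrate why “locally-sourced” data matter, here is a small hypothetical sketch (again in Python with scikit-learn; the populations and the risk relationship are invented for illustration) showing how a model trained on data from one population can perform poorly when applied to a community whose profile differs:

```python
# Illustrative sketch only: a model trained on one population's data can
# transfer poorly to a community with a different underlying profile.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

def make_population(risk_slope, n=500):
    """Synthetic cohort: one feature (e.g., age) and an outcome whose
    relationship to that feature differs between populations."""
    x = rng.uniform(20, 80, (n, 1))
    y = risk_slope * x[:, 0] + rng.normal(0, 5, n)
    return x, y

# Training data come from "elsewhere"; the local population differs.
X_train, y_train = make_population(risk_slope=0.5)
X_local, y_local = make_population(risk_slope=1.2)

model = LinearRegression().fit(X_train, y_train)

print("R^2 on the training population:", model.score(X_train, y_train))
print("R^2 on the local population:   ", model.score(X_local, y_local))
# The second score is markedly worse (here it is even negative, i.e. worse
# than a trivial baseline): predictions tuned to one population form an
# arbitrary basis for another, echoing the "locally-sourced data" point.
```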

Not only should the ethics questions be location-specific, but the potential benefits of AI in health care are relative to location as well. For example, in places most in need of family physicians, any reduction, however slight, in the bureaucratic workload of physicians within that community will open up time for those physicians to see more patients or to spend more time with the patients they do see. In a community where access to a family physician is less challenging, this potential benefit of AI is not as pronounced. Similarly, in places where travelling long distances to receive medical care is common, access to AI-driven diagnostics can be extremely beneficial for people living in those rural and remote areas.

However, just because AI may be able to assist with combatting a shortage of human health care professionals does not mean that efforts to increase the human health care workforce in community x should not continue. One potential ethical concern is that, once AI is available, it may come to be believed that the need for human health care workers is less significant, or that the underlying problem has been resolved.

The question presented to particular communities by the state and its medical institutions should not be “Would you prefer to have AI health care, or else have very little or no health care?”, but rather, “What way(s) of incorporating AI systems into health care delivery are most in line with your values and needs, and best promote the flourishing of people living in this community?”

__________________________________________

Ryan Tonkens is a bioethicist at Lakehead University’s Centre for Health Care Ethics, and Associate Professor at NOSM University.

This commentary is adapted from a previous presentation in the Encounters in Bioethics Series at the Centre for Health Care Ethics at Lakehead University entitled, “Ethics and AI in Health Care in the North”.