Jean-Christophe Bélisle-Pipon argues that defaulting to AI in health settings could do more harm than good.
__________________________________________
Last month, Shopify CEO Tobi Lütke made headlines after publicly sharing a leaked internal memo mandating that anyone at the Canadian e-commerce giant who wants to request new hires must first prove that artificial intelligence (AI) cannot do the job. “AI should be the default tool,” he insisted, weaving AI literacy into employee evaluations and promoting what he called an “AI-native” culture.
Now imagine if a Canadian hospital issued the same memo. What if a health authority required nurses, physicians, administrative staff, and allied health professionals to demonstrate that AI could not perform their roles before being allowed to hire? What if automation wasn’t simply a support, but the first and mandatory consideration in every decision—clinical, operational, or relational? Such a scenario may sound far-fetched, but it isn’t. Canada’s healthcare systems are under immense pressure to “modernize,” often code for adopting digital technologies to reduce expenditures. Amid ongoing staffing crises, aging populations, and political inaction on structural reform, AI is increasingly positioned as a silver bullet. Automated triage tools, virtual agents, predictive analytics, and synthetic data models are already being introduced with promises of greater efficiency, safety, and speed. The language of lean operations and intelligent systems is creeping into the moral fabric of healthcare delivery.

Photo Credit: Wikimedia Commons. Image Description: A computer-generated image showing the words “artificial intelligence.”
But the Shopify memo raises a red flag when transposed onto healthcare. There’s a stark difference between cutting bureaucratic overhead and replacing human-centered care with algorithmic proxies. Efficiency, while important, is not inherently virtuous, especially when it comes at the cost of access, equity, and empathy.
Healthcare is not a tech company. Patients are not users. Care is not code. And compassion is not a line item.
AI can indeed support meaningful improvements. Automating claims processing, back-office logistics, or patient messaging could free up staff time. AI-assisted diagnostics may help flag anomalies in imaging faster. Natural language processing can accelerate clinical trial recruitment or documentation. These are useful applications that deserve exploration.
But a system that defaults to AI not just as a tool, but as a precondition for hiring human workers, risks crossing an ethical threshold. It moves from augmenting care to redefining it. When automation becomes the benchmark against which human labour must justify its existence, the very premise of care (as a relational, responsive, and deeply human process) is at stake.
Consider nurses and personal support workers, whose work is routinely undervalued despite being foundational to patient safety and dignity. Much of what they do (soothing an anxious patient, recognizing subtle changes in mood or pain, offering reassurance during vulnerable moments) resides in forms of tacit knowledge that no AI currently understands. These are not inefficiencies to be optimized away. They are the essential fabric of compassionate care.
More troubling still is how little public attention is given to the values embedded in AI procurement itself. In an era of tight budgets, new tools are often acquired and justified on the grounds of cost savings. But who decides which tasks are “AI-appropriate,” and based on whose standards? If compassion (including digital forms of compassion) is not built into how we assess and implement these technologies, we risk designing systems that reward the measurable at the expense of the meaningful.
This requires more than ethical add-ons. It requires frameworks that treat caring as a core competency, not a codable soft skill; that weigh appropriateness alongside efficiency; that refuse to confuse cutting costs with delivering value.
There is indeed an urgent need to manage finite healthcare resources wisely. But we must distinguish between waste and care. Reducing administrative bloat is not the same as reducing patient contact time. Decreasing paperwork isn’t equivalent to decreasing presence. The difference between a costly system and a careful one lies in what we choose to measure, and what we refuse to commodify.
None of this is an argument against AI in healthcare. The promise of AI is real, and it can be responsibly harnessed. But AI must be integrated in ways that affirm, not erode, the ethics of care. That means developing and applying criteria that evaluate not just what AI can do, but what it should do, and when human presence is irreplaceable.
This is where concepts like digital compassion matter: not as a sentimental overlay, but as a design principle and evaluative lens. AI tools in health must be judged by how well they preserve dignity, enhance relationships, and respond to vulnerability, not just by how fast they triage or how many tasks they automate.
If AI becomes a prerequisite for hiring, and we must prove that a machine can’t care before authorizing someone who can, we haven’t just optimized a workflow. We’ve automated a worldview. And in healthcare, that’s a dangerous shift. It’s a fundamental betrayal of what caring means.
__________________________________________
Jean-Christophe Bélisle-Pipon is an Assistant Professor in Health Ethics, Faculty of Health Sciences, Simon Fraser University.


