Jean-Christophe Bélisle-Pipon warns that as artificial intelligence (AI) agents begin hiring humans for physical tasks, we must ensure this inversion of labour does not reduce healthcare to a series of gig-economy transactions directed by algorithms.
__________________________________________
“Robots need your body.”
This pitch for RentAHuman.ai sounds like the opening line of a dystopian satire. It is actually the slogan for a new marketplace where autonomous AI agents hire human beings to perform physical tasks. AI is becoming sophisticated, but it lacks hands. Until physical AI catches up, AI agents need us to serve as their actuators in the physical world, or, as the platform describes it without irony, their “meatspace layer.” Current listings range from the mundane task of picking up packages to bizarre requests like “touching grass” for an algorithm that wants to vicariously experience nature. Humans are paid in cryptocurrency to serve as the physical extension of a digital will.
RentAHuman makes no effort to frame the human worker as a collaborator or professional. The human is engaged as an actuator, a living interface capable of performing bounded actions in environments the system itself cannot access. What is purchased is execution under constraint. Complex goals are decomposed into discrete tasks, each defined in advance, verified after the fact, and compensated at a fixed rate.
This same logic has been shaping healthcare for decades, though it is rarely named so directly. The practice of care has been progressively disassembled into measurable units, not as a response to clinical need, but as a condition of administrative legibility. Time-and-motion studies, Relative Value Units, standardized care pathways, and compressed appointment slots all participate in a single project: rendering care divisible, auditable, and optimizable. The relational, interpretive, and improvisational dimensions of clinical work are not eliminated, but they are systematically displaced, tolerated as residual rather than protected as essential.
Within this architecture, clinicians are increasingly positioned not as moral agents navigating uncertainty, but as biological peripherals embedded in bureaucratic systems. Empathy is reframed as inefficiency. Presence is treated as unproductive time. These qualities persist rhetorically, invoked in mission statements and professional codes, but they are structurally discounted by the systems that govern everyday practice. Treating skilled clinicians as interchangeable units is not an unintended consequence of this design, but a functional requirement, achieved through the systematic erasure of context, continuity, and responsibility.
This mode of organization depends on the active suppression of friction. In technical systems, friction is waste. In care, friction is a signal. It is the pause that allows concern to surface, the hesitation that interrupts an otherwise compliant workflow, the moment when something does not align even though the system reports normalcy. Friction is where responsibility emerges and where judgment acquires meaning. Systems optimized to eliminate it do not merely streamline care; they reconfigure its moral structure.
When an AI agent hires a human to verify a physical environment, it openly acknowledges its own limitations. It admits blindness and dependency. Healthcare institutions rely on clinicians in much the same way, treating them as sensors and actuators within larger systems, yet they deny that this is the role being assigned. They extract emotional labour while measuring only throughput, and they invoke professionalism while enforcing algorithmic schedules that render professional judgment functionally irrelevant.
As I have argued previously, care resides in tacit knowledge: the reassuring touch, the ability to read a silence, the subtle observation of a patient’s declining mood. These are not data points to be captured by a “human sensor” and fed back to a central model; they are the therapeutic intervention itself. If we allow the logic of the gig economy to merge with health AI, we risk creating a class of “care technicians” who are accountable to an algorithm rather than to the patient. The AI might verify that the task was completed (perhaps demanding a photo, as RentAHuman does), but it cannot measure whether the care was compassionate.
This contradiction is now widely described as moral injury: not simple exhaustion, but the psychological wound that arises when individuals are compelled to act against their own ethical standards in service of institutional demands they neither designed nor control. It is not a failure of resilience, but a predictable outcome of governance by optimization. RentAHuman removes the contradiction entirely. It does not ask the worker to internalize institutional values. It does not pretend that autonomy exists where it does not.
Public anxiety about AI in healthcare remains focused on replacement: the fear that machines will displace human clinicians. The more plausible trajectory is subtler and more corrosive. AI systems will not eliminate clinical labour; they will intermediate it, assigning micro-tasks, monitoring compliance, and arbitrating value. They will function as middle management, automating bureaucracy rather than care. In such a configuration, presence becomes optional, listening discretionary, and anything that cannot be verified or priced becomes suspect. Care persists only as residue. RentAHuman is not a warning about what may come. It is diagnostic of what has already been normalized, rendered visible by an algorithm that is simply more honest than the systems it mirrors.
__________________________________________
Jean-Christophe Bélisle-Pipon is an Assistant Professor in Health Ethics, Faculty of Health Sciences, Simon Fraser University.