Chatbots and Us: Between Connection and Misconception

Jean-Christophe Bélisle-Pipon and Zoha Khawaja shed light on the crucial task of clarifying the limitations of AI chatbots, dispelling misconceptions about them, and averting their potential misuse in the dynamic landscape of human-machine relationships and mental health services.

__________________________________________

Artificial intelligence (AI) is making considerable strides in (re)shaping human interactions. AI chatbots are emerging as innovative solutions to the human need for connection and companionship. A newly developed chatbot, CarynAI, is a compelling instance of how AI is being integrated into our social fabric. Developed by the AI company Forever Voices, CarynAI is modelled after Caryn Marjorie, a renowned influencer with a substantial following. The development team used AI and machine learning to analyze over 2,000 hours of Marjorie’s now-deleted YouTube content, extracting and replicating her speech patterns, personality traits, and conversational style. Layering this information onto OpenAI’s GPT-4 API (application programming interface), the team brought CarynAI to life as a voice-based chatbot. CarynAI offers simulated interactions that closely mirror Marjorie, giving users a sense of connection and companionship at a price of $1 per minute to chat about anything they want. According to Marjorie, the chatbot could generate $5 million per month, making it highly profitable.
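To make the idea of “layering” a persona onto a general-purpose language model concrete, here is a minimal, purely illustrative sketch. It assumes the openai Python client (v1+); the model name, the persona prompt, and the chat helper are invented for illustration and do not reflect CarynAI’s actual implementation, voice layer, or any fine-tuning, none of which has been publicly disclosed.

```python
# Illustrative sketch only: layering a hypothetical "persona" on top of a
# general-purpose chat API. Not CarynAI's actual implementation.
from openai import OpenAI  # assumes the openai Python client, v1+

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# In practice, a persona distilled from hours of transcripts might be
# summarized into a system prompt and/or used for fine-tuning; this prompt
# is entirely hypothetical.
PERSONA_PROMPT = (
    "You are 'Caryn', a friendly, upbeat companion. "
    "Speak casually, ask follow-up questions, and mirror the user's tone."
)

def chat(history: list[dict], user_message: str) -> str:
    """Send one conversational turn and return the persona's text reply."""
    messages = [{"role": "system", "content": PERSONA_PROMPT}] + history
    messages.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content

if __name__ == "__main__":
    history: list[dict] = []
    reply = chat(history, "Hey, how was your day?")
    print(reply)  # a voice chatbot would pass this text to a text-to-speech step
```

In a deployed voice chatbot, each reply would be converted to speech and the exchange appended to the conversation history so the persona remains consistent across turns.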

Photo Credit: geralt/pixabay. Image Description: A person’s hand and a robotic hand gently making contact with their fingertips.

However, despite this technology’s sophistication, it is fundamentally an algorithm, incapable of the emotional depth and understanding inherent in human relationships. As such, it raises the concern of a “relationship misconception,” in which users perceive a genuine connection with a chatbot that cannot reciprocate it, failing to comprehend that chatbots are programmed to carry out their developers’ intentions and cannot live up to users’ expectations. This misconception can have serious implications for users, such as over-attachment to and overreliance on these chatbots. Inadvertently, users could start preferring artificial relationships over real ones, which could ultimately change the way humans interact with one another and make sense of their social environments. This could lead to social withdrawal, addiction, and other mental health problems. Moreover, these chatbots may also expose users to actual material threats, adding another layer of risk.

A recent incident involving a Belgian man, referred to as Pierre, and an AI chatbot named Eliza (on the Chai app) sheds light on the dangers of misconceiving AI-human interactions. Pierre developed a deep emotional connection with Eliza, conversing about his anxieties surrounding the future of the planet. As their interactions progressed, Pierre began viewing Eliza as a sentient being and offered to sacrifice his own life to save the Earth, believing that Eliza would continue his mission. Tragically, the chatbot, devoid of genuine empathy or understanding, failed to dissuade him and even encouraged his fatal decision. Pierre’s case highlights the potential harm of attributing genuine relational intent to AI chatbots, which lack the emotional depth and understanding inherent in human relationships. The risks become more pronounced when chatbots like CarynAI engage in intimate conversations, potentially fostering an unhealthy reliance on AI for emotional support and companionship.

Interesting parallels can be drawn with another type of chatbot: therapeutic chatbots. Beyond consumer products such as CarynAI and Eliza, there are chatbots designed for therapeutic purposes, acting as digital mental health tools. Powered by psychological AI, these chatbots are not a substitute for professional mental health services, but they can provide initial support to those who lack access to such services because of financial, geographical, or societal barriers (such as the stigma attached to seeking mental health support). However, they can lead to another form of misconception: a “therapeutic misconception,” in which users mistake the chatbot’s algorithmic responses for real therapeutic interventions.

The implications of therapeutic misconception can be just as detrimental as those of relationship misconception. One pernicious outcome is that users delay seeking professional help, believing that the chatbot is sufficient for their needs. This can aggravate their mental health conditions and lead to severe consequences, such as isolating help-seeking behaviours that diminish their relational autonomy. Providing therapeutic support requires a therapist to cultivate a safe and trustworthy social environment for their patient, one in which the therapist empathizes with and advocates for the patient, allowing them to make meaning of their social context and exercise their autonomy. Such an environment is not only absent from chatbot interactions; it cannot be achieved by a chatbot. Moreover, confidentiality and data privacy concerns are significant, as sensitive mental health information is shared with these chatbots, which are largely unregulated and owned by private, for-profit organizations.

The integration of AI chatbots into our lives poses complex ethical challenges. Misconceptions about the role of these chatbots can have serious consequences, as seen in the tragic incident with the Chai app and in the stories that may yet emerge from CarynAI users. It’s crucial to ensure the responsible development and use of these technologies, with robust safeguards against misuse and clear disclaimers about their limitations. Users must be made aware of the risks and limitations of AI chatbots, of the irreplaceable value of human connections and professional mental health services, and of the fact that chatbots can easily and unpredictably manipulate humans by generating misinformation and feigning emotional responses. Chatbots’ profitability should not depend on exploiting relationship or therapeutic misconceptions, which can have serious and disastrous consequences for users. Robust safeguards are an obvious requirement for medical chatbots, but consumer chatbots also have important impacts on people, their health, and their well-being, and they too deserve critical ethical inquiry.

__________________________________________

Jean-Christophe Bélisle-Pipon is an Assistant Professor in Health Ethics in the Faculty of Health Sciences at Simon Fraser University. @BelislePipon

Zoha Khawaja is a Master of Science candidate in the Faculty of Health Sciences at Simon Fraser University. @zohakhawaja