Does a lack of emotions make chatbots unfit to be psychotherapists?

Bibliographic Details
Authors: Rahsepar Meadi, Mehrdad; Bernstein, Justin S.; Batelaan, Neeltje; van Balkom, Anton J. L. M.; Metselaar, Suzanne
Format: Electronic Article
Language: English
Published: Wiley-Blackwell 2024
In: Bioethics
Year: 2024, Volume: 38, Issue: 6, Pages: 503-510
IxTheo Classification: NBE Anthropology
ZD Psychology
ZG Media studies; Digital media; Communication studies
Further subjects: Countertransference; Artificial intelligence; Chatbots; Mental health; Empathy; Quality of care
Online Access: Full text (free of charge)
Description
Summary: Mental health chatbots (MHCBs) designed to support individuals in coping with mental health issues are rapidly advancing. Currently, these MHCBs are predominantly used in commercial rather than clinical contexts, but this might change soon. The question is whether this use is ethically desirable. This paper addresses a critical yet understudied concern: how the assumption that MHCBs cannot have genuine emotions may affect psychotherapy, and consequently the quality of treatment outcomes. We argue that if MHCBs lack emotions, they cannot have genuine (affective) empathy or utilise countertransference. This gives reason to worry that MHCBs are (a) more liable to harm and (b) less likely to benefit patients than human therapists. We discuss some responses to this worry and argue that further empirical research is necessary to determine whether it is valid. We conclude that, even if these worries are valid, this does not mean we should never use MHCBs. By discussing the broader ethical debate on the clinical use of chatbots, we point towards how further research can help us establish ethical boundaries for how we should use mental health chatbots.
ISSN:1467-8519
Contained in: Bioethics
Persistent identifiers: DOI: 10.1111/bioe.13299