"This Chatbot Would Never...": Perceived Moral Agency of Mental Health Chatbots

Research output: Contribution to journal › Journal article › Research › peer-review

4 Citations (Scopus)
36 Downloads (Pure)

Abstract

Despite repeated reports of socially inappropriate and dangerous chatbot behaviour, chatbots are increasingly used as mental health services providing support for young people. In such sensitive settings, the notion of perceived moral agency (PMA) is crucial, given its critical role in human-human interactions. In this paper, we investigate the role of PMA in human-chatbot interactions. Specifically, we seek to understand how PMA influences the perception of trust, likeability, and perceived safety of mental health chatbots across two distinct age groups. We conducted an online experiment (N = 279) to evaluate chatbots with low and high PMA targeted towards teenagers and adults. Our results indicate increased trust, likeability, and perceived safety for mental health chatbots displaying high PMA. A qualitative analysis revealed four themes capturing participants' expectations of mental health chatbots in general, as well as those targeted towards teenagers: Anthropomorphism, Warmth, Sensitivity, and Appearance manifestation. We show that PMA plays a crucial role in shaping perceptions of chatbots, and we provide recommendations for designing socially appropriate mental health chatbots.
Original language: English
Article number: 133
Journal: Proceedings of the ACM on Human-Computer Interaction
Volume: 8
Issue number: CSCW1
Pages (from-to): 1-28
Number of pages: 28
DOIs
Publication status: Published - 26 Apr 2024

Keywords

  • Agency
  • Chatbot
  • Expectation
  • Human-Computer Interaction
  • Mental health
  • Moral
  • Perception