On Moral Manifestations in Large Language Models

Research output: Contribution to book/anthology/report/conference proceeding › Conference abstract in proceeding › Research › peer-review


Abstract

Since OpenAI released ChatGPT, researchers, policy-makers, and laypersons have raised concerns about its false statements, which are, moreover, delivered in an overly confident manner. We identify this flaw as part of its functionality and describe why large language models (LLMs), such as ChatGPT, should be understood as social agents that manifest morality. This manifestation is a consequence of their human-like natural language capabilities, which lead humans to interpret LLMs as potentially having moral intentions and the ability to act on those intentions. Taking ‘overly confident’ communication of knowledge as an example, we outline why appropriate communication between people and ChatGPT relies on moral manifestations. Moreover, we put forward future research directions for fully autonomous and semi-functional systems, such as ChatGPT, calling attention to how engineers, developers, and designers can facilitate end-users’ sense-making of LLMs by increasing moral transparency.
Original language: English
Title of host publication: CHI ’23 EA: ACM CHI Conference on Human Factors in Computing Systems @ Workshop on Moral Agents
Number of pages: 4
Publication date: 20 Mar 2023
Pages: 1-4
Publication status: Published - 20 Mar 2023
Event: 2023 ACM CHI Conference on Human Factors in Computing Systems, CHI ’23 - Hamburg, Germany
Duration: 23 Apr 2023 - 28 Apr 2023

Keywords

  • ChatGPT
  • Large Language Models
  • Social Agent
  • Moral Manifestation
  • Moral Cognition
  • Overconfidence
