On Moral Manifestations in Large Language Models

Publication: Contribution to book/anthology/report/conference proceeding › Conference abstract in proceeding › Research › peer review


Abstract

Since OpenAI released ChatGPT, researchers, policy-makers, and laypersons have raised concerns about its false statements, which are moreover expressed in an overly confident manner. We identify this flaw as part of its functionality and describe why large language models (LLMs), such as ChatGPT, should be understood as social agents manifesting morality. This manifestation arises as a consequence of their human-like natural language capabilities, which lead humans to interpret LLMs as potentially having moral intentions and the ability to act upon those intentions. We outline why appropriate communication between people and ChatGPT relies on moral manifestations, exemplified by the ‘overly confident’ communication of knowledge. Moreover, we put forward future research directions for fully autonomous and semi-functional systems, such as ChatGPT, calling attention to how engineers, developers, and designers can facilitate end-users’ sense-making of LLMs by increasing moral transparency.
Original language: English
Title: CHI ’23: ACM CHI Conference on Human Factors in Computing Systems @ Workshop on Moral Agents,
Number of pages: 4
Publication date: 20 Mar 2023
Pages: 1-4
Status: Published - 20 Mar 2023
