Abstract
Previous research on conversational, competitive, and cooperative systems suggests that people perceive and evaluate observed team-mate behavior differently depending on whether the team-mate is a human or an AI agent. However, no research has examined the relationship between participants' protective behavior toward human or AI team-mates and their beliefs about that behavior. A study was conducted in which 32 participants played two sessions of a cooperative game, once with a "presumed" human and once with an AI team-mate; players could "draw fire" from a common enemy by "yelling" at it. Overwhelmingly, players claimed they "drew fire" on behalf of the presumed human more than for the AI team-mate; logged data indicates the opposite. The main contribution of this paper is evidence of this mismatch between players' beliefs about their actions and their actual behavior with humans or agents, along with possible explanations for the difference.
Original language | English |
---|---|
Title of host publication | Conference Proceedings - The 30th ACM Conference on Human Factors in Computing Systems, CHI 2012 |
Number of pages | 10 |
Publication date | 24 May 2012 |
Pages | 2793-2802 |
ISBN (Print) | 9781450310154 |
DOIs | |
Publication status | Published - 24 May 2012 |
Externally published | Yes |
Event | 30th ACM Conference on Human Factors in Computing Systems, CHI 2012 - Austin, TX, United States Duration: 5 May 2012 → 10 May 2012 |
Conference
Conference | 30th ACM Conference on Human Factors in Computing Systems, CHI 2012 |
---|---|
Country/Territory | United States |
City | Austin, TX |
Period | 05/05/2012 → 10/05/2012 |
Sponsor | ACM Special Interest Group on Computer-Human Interaction (ACM SIGCHI), Autodesk, Bloomberg, Google, eBay |
Keywords
- CASA
- CSCP
- Team-mate