Abstract
The uptake of artificial intelligence-based applications raises concerns about the fairness and transparency of AI behaviour. Consequently, the Computer Science community calls for the involvement of the general public in the design and evaluation of AI systems. Assessing the fairness of individual predictors is an essential step in the development of equitable algorithms. In this study, we evaluate the effect of two common visualisation techniques (text-based and scatterplot) and the display of the outcome information (i.e., ground-truth) on the perceived fairness of predictors. Our results from an online crowdsourcing study (N = 80) show that the chosen visualisation technique significantly alters people’s fairness perception and that the presented scenario, as well as the participant’s gender and past education, influence perceived fairness. Based on these results we draw recommendations for future work that seeks to involve non-experts in AI fairness evaluations.
Original language | English |
---|---|
Title of host publication | CHI 2021 - Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems: Making Waves, Combining Strengths |
Number of pages | 13 |
Publisher | Association for Computing Machinery |
Publication date | 6 May 2021 |
Article number | 245 |
ISBN (Electronic) | 978-1-4503-8096-6 |
DOIs | |
Publication status | Published - 6 May 2021 |
Event | ACM CHI 2021 conference on human factors in computing systems, Online virtual, Yokohama, Japan. Duration: 8 May 2021 → 13 May 2021. https://chi2021.acm.org/ |
Conference
Conference | ACM CHI 2021 conference on human factors in computing systems |
---|---|
Location | Online virtual |
Country/Territory | Japan |
City | Yokohama |
Period | 08/05/2021 → 13/05/2021 |
Internet address | https://chi2021.acm.org/ |
Keywords
- AI
- Artificial intelligence
- Crowdsourcing
- Fairness
- Layperson
- Machine learning
- ML
- Predictor selection
- Transparency
- Visualisation