Adaptation of AI Explanations to Users' Roles

Julien Delaunay, Luis Galárraga, Christine Largouët, Niels van Berkel

Publication: Contribution to book/anthology/report/conference proceedings · Conference article in proceedings · Research · Peer-reviewed

Abstract

Surrogate explanations approximate a complex model by training a simpler model over an interpretable space. Among these simpler models, we identify three kinds of surrogate methods: (a) feature-attribution, (b) example-based, and (c) rule-based explanations. Each surrogate approximates the complex model differently, and we hypothesise that this can impact how users interpret the explanation. Despite the numerous calls for introducing explanations for all, no prior work has compared the impact of these surrogates on specific user roles (e.g., domain expert, developer). In this article, we outline a study design to assess the impact of these three surrogate techniques across different user roles.
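The surrogate idea described above can be illustrated with a minimal LIME-style sketch: a linear model is fitted on perturbed samples around one instance, and its coefficients serve as feature attributions. This is an illustrative assumption, not the authors' method; `complex_model`, the sampling scale, and all names below are hypothetical.

```python
import numpy as np

def complex_model(X):
    # Hypothetical black-box model: a nonlinear function of two features.
    return X[:, 0] ** 2 + 3 * X[:, 1]

def local_surrogate(instance, n_samples=500, scale=0.1, seed=0):
    """Fit a local linear surrogate around `instance`; return its coefficients."""
    rng = np.random.default_rng(seed)
    # Sample perturbations in the neighbourhood of the instance.
    X = instance + rng.normal(0.0, scale, size=(n_samples, instance.size))
    y = complex_model(X)
    # Fit a linear model with intercept by least squares.
    A = np.hstack([X, np.ones((n_samples, 1))])
    coefs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coefs[:-1]  # feature attributions (intercept dropped)

attributions = local_surrogate(np.array([1.0, 2.0]))
# Near x = (1, 2) the local gradient is roughly (2, 3).
print(attributions)
```

Example-based and rule-based surrogates differ only in the simple model fitted on the perturbed samples (e.g., nearest prototypes or a shallow decision tree instead of the linear model).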
Original language: English
Title: Adjunct Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems - Workshop on Human-Centered Explainable AI
Number of pages: 7
Publication date: 2023
Pages: 1-7
Status: Published - 2023
Event: 2023 ACM CHI Conference on Human Factors in Computing Systems, CHI 23 - Hamburg, Germany
Duration: 23 Apr 2023 - 28 Apr 2023

Conference

Conference: 2023 ACM CHI Conference on Human Factors in Computing Systems, CHI 23
Country/Territory: Germany
City: Hamburg
Period: 23/04/2023 - 28/04/2023
