Expert level evaluations for explainable AI (XAI) methods in the medical domain

Satya Mahesh Muddamsetty, Mohammad Naser Sabet Jahromi, Thomas B. Moeslund

Research output: Contribution to conference proceeding › Article in proceeding › Research › peer-review


Abstract

The recently emerged field of explainable artificial intelligence (XAI) attempts to shed light on 'black box' machine learning (ML) models in terms understandable to humans. As explanation methods are developed alongside diverse applications of black-box models, the need for expert-level evaluation of their effectiveness becomes inevitable. This is especially important in sensitive domains such as medical applications, where expert evaluation is essential both for understanding how accurate the results of complex ML models are and for debugging the models when necessary. The aim of this study is to show experimentally how expert-level evaluation of XAI methods in a medical application can be carried out and aligned with the explanations actually produced by clinicians. To this end, we collect annotations from expert subjects equipped with an eye-tracker while they classify medical images, and we devise an approach for comparing the results with those obtained from XAI methods. We demonstrate the effectiveness of our approach in several experiments.
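As a minimal illustration of the kind of comparison the abstract describes, the sketch below assumes the eye-tracker data has already been aggregated into a 2D fixation heatmap and that the XAI method yields a saliency map of the same size; it scores their agreement with two metrics common in saliency evaluation (histogram intersection and Pearson correlation). All names and metric choices here are illustrative assumptions, not the authors' actual protocol.

    # Hypothetical sketch (not the paper's exact method): comparing an
    # expert's eye-tracking fixation heatmap with an XAI saliency map.
    import numpy as np

    def normalize(m: np.ndarray) -> np.ndarray:
        """Shift a map to be non-negative and scale it to sum to 1."""
        m = m.astype(np.float64) - m.min()
        s = m.sum()
        return m / s if s > 0 else m

    def intersection_similarity(gaze: np.ndarray, saliency: np.ndarray) -> float:
        """Histogram-intersection similarity of two normalized maps, in [0, 1]."""
        return float(np.minimum(normalize(gaze), normalize(saliency)).sum())

    def pearson_cc(gaze: np.ndarray, saliency: np.ndarray) -> float:
        """Pearson correlation coefficient between the flattened maps."""
        return float(np.corrcoef(gaze.ravel(), saliency.ravel())[0, 1])

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        gaze_map = rng.random((224, 224))      # stand-in for a fixation heatmap
        saliency_map = rng.random((224, 224))  # stand-in for, e.g., a Grad-CAM map
        print(f"SIM = {intersection_similarity(gaze_map, saliency_map):.3f}")
        print(f"CC  = {pearson_cc(gaze_map, saliency_map):.3f}")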
Original language: English
Title of host publication: ICPR-2020 Workshop Explainable Deep Learning-AI
Number of pages: 12
Publisher: Springer Publishing Company
Publication status: Accepted/In press - 10 Nov 2020

Keywords

  • Explainable AI (XAI)
  • Deep learning
  • Expert-level explanation
  • XAI evaluation
  • Retinal images
  • Eye-tracker
