Expert level evaluations for explainable AI (XAI) methods in the medical domain

Satya Mahesh Muddamsetty, Mohammad Naser Sabet Jahromi, Thomas B. Moeslund

Publication: Contribution to book/anthology/report/conference proceeding › Conference article in proceeding › Research › peer-reviewed



The recently emerged field of explainable artificial intelligence (XAI) attempts to shed light on 'black box' machine learning (ML) models in terms understandable to humans. As explanation methods are developed for black-box models across different applications, expert-level evaluation of their effectiveness becomes indispensable. This is especially important in sensitive domains such as medical applications, where expert evaluation is essential both to better understand how accurate the results of complex ML models are and to debug the models if necessary. The aim of this study is to show experimentally how expert-level evaluation of XAI methods in a medical application can be carried out and aligned with the actual explanations generated by clinicians. To this end, we collect annotations from expert subjects equipped with an eye-tracker while they classify medical images, and devise an approach for comparing the results with those obtained from XAI methods. We demonstrate the effectiveness of our approach in several experiments.
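The comparison step described above can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the use of Pearson correlation as the similarity score, and the assumption that both maps are dense 2D arrays of equal shape are all illustrative assumptions.

```python
# Hypothetical sketch: scoring the agreement between an expert's
# eye-tracking fixation heatmap and an XAI saliency map.
# Metric (Pearson correlation) and interface are assumptions, not the
# method from the paper.
import numpy as np

def attention_similarity(gaze_map: np.ndarray, saliency_map: np.ndarray) -> float:
    """Pearson correlation between two attention maps of equal shape.

    Returns a value in [-1, 1]; higher means the XAI explanation
    highlights regions closer to where the expert actually looked.
    """
    g = gaze_map.ravel().astype(float)
    s = saliency_map.ravel().astype(float)
    # Standardize each map (epsilon guards against constant maps).
    g = (g - g.mean()) / (g.std() + 1e-8)
    s = (s - s.mean()) / (s.std() + 1e-8)
    return float(np.mean(g * s))
```

A map identical to the gaze heatmap scores near 1, while an inverted map scores near -1, so the metric orders XAI methods by how well they match expert attention.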
Title: ICPR-2020 Workshop Explainable Deep Learning-AI
Number of pages: 12
Publisher: Springer Publishing Company
Status: Accepted/In press - 10 Nov 2020
