The recently emerged field of explainable artificial intelligence (XAI) attempts to shed light on 'black box' machine learning (ML) models in terms understandable to humans. As multiple explanation methods are developed for a given black box model across different applications, expert-level evaluation of their effectiveness becomes inevitable. This is particularly important in sensitive domains such as medicine, where expert evaluation is essential for understanding how accurate the results of complex ML models are and for debugging the models if necessary. The aim of this study is to show experimentally how expert-level evaluation of XAI methods in a medical application can be carried out and aligned with the actual explanations generated by clinicians. To this end, we collect annotations from expert subjects equipped with an eye tracker while they classify medical images, and devise an approach for comparing the results with those obtained from XAI methods. We demonstrate the effectiveness of our approach in several experiments.
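The abstract does not specify how the expert annotations and the XAI outputs are compared. A minimal sketch of one plausible comparison, assuming both the eye-tracker data and the XAI method are reduced to per-pixel attention maps of the same shape (the `similarity` function and its metrics are illustrative, not the paper's actual method):

```python
import numpy as np

def similarity(gaze_map, saliency_map, top_frac=0.2):
    """Compare an expert gaze heatmap with an XAI saliency map.

    Returns (a) the Pearson correlation of the flattened maps and
    (b) the intersection-over-union of their top-`top_frac` pixels.
    Both metrics are common choices for saliency comparison; the
    paper may use different ones.
    """
    g = gaze_map.ravel().astype(float)
    s = saliency_map.ravel().astype(float)
    # Pearson correlation between the two attention maps
    corr = float(np.corrcoef(g, s)[0, 1])
    # Binarise each map by keeping its top-k pixels, then take IoU
    k = max(1, int(top_frac * g.size))
    g_top = set(np.argsort(g)[-k:])
    s_top = set(np.argsort(s)[-k:])
    iou = len(g_top & s_top) / len(g_top | s_top)
    return corr, iou

# toy example: identical maps agree perfectly on both metrics
m = np.arange(16.0).reshape(4, 4)
corr, iou = similarity(m, m)
```

A high correlation or IoU would indicate that the XAI method highlights the same retinal regions the clinician actually looked at during classification.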
Title of host publication: ICPR-2020 Workshop Explainable Deep Learning-AI
Number of pages: 12
Publisher: Springer Publishing Company
Publication status: Accepted/In press - 10 Nov 2020
- Explainable AI (XAI), Deep learning, Expert-level explanation, XAI evaluation, Retinal Images, Eye-tracker