Interpretability Methods for Graph Neural Networks (Tutorial)

Research output: Contribution to conference without publisher/journal › Paper without publisher/journal › Research › peer-review

Abstract

The emerging graph neural network (GNN) models have demonstrated great potential and success for downstream graph machine learning tasks, such as graph and node classification, link prediction, entity resolution, and question answering. However, neural networks are “black boxes” – it is difficult to understand which aspects of the input data and the model guide the decisions of the network. Recently, several interpretability methods for GNNs have been developed, aiming to improve the models’ transparency and fairness, thus making them trustworthy in decision-critical applications, democratizing deep learning approaches, and easing their adoption. The tutorial is designed to offer an overview of the state-of-the-art interpretability techniques for graph neural networks, including their taxonomy, evaluation metrics, benchmarking studies, and ground truth. In addition, the tutorial discusses open problems and important research directions.
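To make concrete what “understanding which aspects of the input guide the decision” can mean, the sketch below (not part of the tutorial) illustrates one of the simplest interpretability techniques: gradient-based feature saliency on a one-layer GCN over a toy graph. It assumes plain PyTorch; the adjacency matrix, weights, and target node are made up purely for illustration.

    # Hypothetical toy example: gradient-based feature saliency for a GNN.
    import torch

    # Toy graph: 4 nodes with self-loops, 3 input features per node.
    A = torch.tensor([[1., 1., 0., 0.],
                      [1., 1., 1., 0.],
                      [0., 1., 1., 1.],
                      [0., 0., 1., 1.]])
    deg = A.sum(dim=1)
    A_hat = A / torch.sqrt(deg.unsqueeze(1) * deg.unsqueeze(0))  # symmetric normalization

    X = torch.randn(4, 3, requires_grad=True)  # node features (the input being explained)
    W = torch.randn(3, 2)                      # untrained weights, 2 output classes

    logits = A_hat @ X @ W                     # one GCN propagation step
    node, cls = 2, logits[2].argmax()
    logits[node, cls].backward()               # gradient of the chosen prediction w.r.t. X

    saliency = X.grad.abs()                    # larger value = more influential node/feature
    print(saliency)

More elaborate explainers surveyed in the tutorial literature (e.g., perturbation- or surrogate-based methods) typically replace the raw gradient with learned masks over edges and features, but the goal is the same: attributing a prediction back to parts of the input graph.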
Original language: English
Publication date: 10 Oct 2023
Number of pages: 4
Publication status: Published - 10 Oct 2023
Event: 10th IEEE International Conference on Data Science and Advanced Analytics (DSAA) - Thessaloniki, Greece
Duration: 8 Oct 2023 - 13 Oct 2023

Conference

Conference: 10th IEEE International Conference on Data Science and Advanced Analytics (DSAA)
Location: Thessaloniki, Greece
Country/Territory: Greece
City: Thessaloniki
Period: 08/10/2023 - 13/10/2023

Keywords

  • graph neural networks, interpretability, explainable AI
