Abstract
The emerging graph neural network (GNN) models have demonstrated great potential and success in downstream graph machine learning tasks, such as graph and node classification, link prediction, entity resolution, and question answering. However, neural networks are "black boxes" – it is difficult to understand which aspects of the input data and the model guide the network's decisions. Recently, several interpretability methods for GNNs have been developed, aiming to improve model transparency and fairness, thus making GNNs trustworthy in decision-critical applications, democratizing deep learning approaches, and easing their adoption. The tutorial is designed to offer an overview of state-of-the-art interpretability techniques for graph neural networks, including their taxonomy, evaluation metrics, benchmarking studies, and ground truth. In addition, the tutorial discusses open problems and important research directions.
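To make the subject concrete, below is a minimal sketch of one family of interpretability methods the tutorial surveys – gradient-based saliency – applied to a toy one-layer graph convolutional network. The graph, features, and weights here are purely illustrative and not taken from the tutorial; the gradient is computed analytically rather than with an autodiff framework.

```python
import numpy as np

# Toy graph: 4 nodes, adjacency with self-loops, row-normalized
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
A_hat = A / A.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))   # node features
W = rng.normal(size=(3, 2))   # layer weights

# One-layer GCN forward pass: H = ReLU(A_hat @ X @ W)
Z = A_hat @ X @ W
H = np.maximum(Z, 0)

# Gradient-based saliency for the target node's first output logit
# s = H[target, 0]. By the chain rule,
# ds/dX[j, k] = relu'(Z[target, 0]) * A_hat[target, j] * W[k, 0]
target = 2
relu_mask = float(Z[target, 0] > 0)
saliency = np.abs(np.outer(A_hat[target], W[:, 0]) * relu_mask)

# Higher values mark input features that influence the prediction more;
# non-neighbors of the target node receive exactly zero saliency.
print(saliency.shape)  # (4, 3): one importance score per node feature
```

Methods covered in the literature (e.g., GNNExplainer, PGExplainer) learn edge and feature masks instead of raw gradients, but the underlying question is the same: which parts of the input graph drive the prediction?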
Original language | English |
---|---|
Publication date | 10 Oct 2023 |
Number of pages | 4 |
Publication status | Published - 10 Oct 2023 |
Event | 10th IEEE International Conference on Data Science and Advanced Analytics (DSAA): Interpretability Methods for Graph Neural Networks (Tutorial) - Thessaloniki, Greece. Duration: 8 Oct 2023 → 13 Oct 2023 |
Conference
Conference | 10th IEEE International Conference on Data Science and Advanced Analytics (DSAA) |
---|---|
Location | Thessaloniki, Greece |
Country/Territory | Greece |
City | Thessaloniki |
Period | 08/10/2023 → 13/10/2023 |
Keywords
- graph neural networks, interpretability, explainable AI