TY - GEN
T1 - Verifying Machine Unlearning with Explainable AI
AU - Pujol Vidal, Alex
AU - Johansen, Anders Skaarup
AU - Sabet Jahromi, Mohammad Naser
AU - Escalera Guerrero, Sergio
AU - Nasrollahi, Kamal
AU - Moeslund, Thomas B.
N1 - Conference code: 27
PY - 2025/4/25
Y1 - 2025/4/25
AB - We investigate the effectiveness of Explainable AI (XAI) in verifying Machine Unlearning (MU) within the context of harbor front monitoring, focusing on data privacy and regulatory compliance. With the increasing need to comply with privacy legislation such as the General Data Protection Regulation (GDPR), traditional approaches that retrain machine learning (ML) models after data deletions prove impractical due to their complexity and resource demands. MU offers a solution by enabling models to selectively forget specific learned patterns without full retraining. We explore several removal techniques, including data relabeling and model perturbation, and then leverage attribution-based XAI to examine the effects of unlearning on model performance. Our proof-of-concept (code and additional visualizations available at https://github.com/ASJAAU/Explaining_unlearning.git) introduces feature importance as an innovative verification step for MU, expanding beyond traditional metrics and demonstrating these techniques’ ability to reduce reliance on undesired patterns. Additionally, we propose two novel XAI-based metrics, Heatmap Coverage (HC) and Attention Shift (AS), to evaluate the effectiveness of these methods. This approach not only highlights how XAI can complement MU by providing effective verification, but also sets the stage for future research on their tighter integration.
KW - Explainability
KW - Machine Unlearning
KW - Right To Be Forgotten
KW - Object Counting
UR - http://www.scopus.com/inward/record.url?scp=105004253294&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-88223-4_32
DO - 10.1007/978-3-031-88223-4_32
M3 - Article in proceeding
SN - 978-3-031-88222-7
VL - 15619
T3 - International Conference on Pattern Recognition
SP - 458
EP - 473
BT - Pattern Recognition. ICPR 2024 International Workshops and Challenges
A2 - Palaiahnakote, Shivakumara
A2 - Schuckers, Stephanie
A2 - Ogier, Jean-Marc
A2 - Bhattacharya, Prabir
A2 - Pal, Umapada
A2 - Bhattacharya, Saumik
PB - Springer
T2 - 27th International Conference on Pattern Recognition
Y2 - 1 December 2024 through 5 December 2024
ER -