Position: XAI needs formal notions of explanation correctness

24/01/2025

Stefan Haufe1,2,3, Rick Wilming2, Benedict Clark2, Rustam Zhumagambetov2, Danny Panknin2, and Ahcene Boubekki2
1Technische Universität Berlin, Germany
2Physikalisch-Technische Bundesanstalt (PTB), Germany
3Charité – Universitätsmedizin Berlin, Germany

Position: XAI needs formal notions of explanation correctness, The Thirty-Eighth Annual Conference on Neural Information Processing Systems, NeurIPS 2024 Workshop Interpretable AI, article number 5, 2024.

https://openreview.net/forum?id=g0I1h8JmtE

The focus is on the limits of explainable AI (XAI) methods in high-risk applications, such as medical ones. The work discusses how these limitations can be overcome by properly defining notions of explanation correctness, which can be verified theoretically, and objective metrics of explanation performance, which can be assessed using ground-truth data.