Position: XAI needs formal notions of explanation correctness

10 December 2024

A poster contribution was presented by Prof. Stefan Haufe (PTB, Germany) at the "Interpretable AI: Past, Present and Future" workshop of the 38th Annual Conference on Neural Information Processing Systems (NeurIPS), held in Vancouver, Canada, on 10–15 December 2024 (https://neurips.cc/).

The presented work focuses on the limits of explainable AI (XAI) methods in high-risk applications such as medicine. It discusses how these limitations can be overcome by properly defining notions of explanation correctness, against which XAI methods can be verified theoretically, and objective metrics of explanation performance, which can be assessed empirically using ground-truth data.
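To illustrate the general idea of ground-truth-based evaluation, the following is a minimal sketch, not the authors' benchmark code: synthetic data are generated so that the set of informative features is known by construction, and an attribution is scored by how well it ranks those features above uninformative ones. All variable names and the choice of metric (AUROC) are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic data: only the first 5 of 20 features carry class information,
# so the ground-truth importance of every feature is known by construction.
n, d, k = 1000, 20, 5
X = rng.standard_normal((n, d))
y = (X[:, :k].sum(axis=1) + 0.5 * rng.standard_normal(n) > 0).astype(int)
ground_truth = np.zeros(d)
ground_truth[:k] = 1

# Train a model and take its absolute coefficients as a stand-in
# "explanation"; any saliency/attribution method could be plugged in here.
model = LogisticRegression().fit(X, y)
attribution = np.abs(model.coef_.ravel())

# Objective explanation-performance metric: how well does the attribution
# rank the truly informative features above the uninformative ones?
print(f"explanation AUROC: {roc_auc_score(ground_truth, attribution):.3f}")

Because the informative features are fixed in advance, the score is an objective measure of explanation performance rather than a subjective judgment of plausibility.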

Authors: Stefan Haufe (1,2,3), Rick Wilming (2), Benedict Clark (2), Rustam Zhumagambetov (2), Danny Panknin (2), and Ahcène Boubekki (2)

  1. Technische Universität Berlin, Germany
  2. Physikalisch-Technische Bundesanstalt (PTB), Germany
  3. Charité – Universitätsmedizin Berlin, Germany