teex: A toolbox for the evaluation of explanations - Data, Intelligence and Graphs team
Journal article, Neurocomputing, 2023


Abstract

We present teex, a Python toolbox for the evaluation of explanations. teex focuses on the evaluation of local explanations of the predictions of machine learning models by comparing them to ground-truth explanations. It supports several types of explanations: feature importance vectors, saliency maps, decision rules, and word importance maps. A collection of evaluation metrics is provided for each explanation type. Real-world datasets and generators of synthetic data with ground-truth explanations are also included in the library. teex contributes to research on explainable AI by providing tested, streamlined, user-friendly tools to compute quality metrics for the evaluation of explanation methods. Source code and a basic overview can be found at github.com/chus-chus/teex, and tutorials and full API documentation are at teex.readthedocs.io.
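
To illustrate the kind of comparison teex automates, the sketch below scores a predicted feature importance vector against a ground-truth vector using cosine similarity on the raw values and precision, recall, and F1 after thresholding. This is a minimal, self-contained illustration in plain NumPy; the function name, metric selection, and threshold are assumptions made here for illustration and do not reflect teex's actual API (see teex.readthedocs.io for the real interface).

    import numpy as np

    def compare_feature_importance(gt, pred, threshold=0.5):
        # Illustrative only, not teex's API: score a predicted feature
        # importance vector against a ground-truth one via cosine similarity
        # on the raw values, plus precision/recall/F1 after binarising both
        # vectors at `threshold`.
        gt = np.asarray(gt, dtype=float)
        pred = np.asarray(pred, dtype=float)

        cosine = float(gt @ pred / (np.linalg.norm(gt) * np.linalg.norm(pred)))

        gt_bin, pred_bin = gt >= threshold, pred >= threshold
        tp = float(np.sum(gt_bin & pred_bin))
        precision = tp / max(pred_bin.sum(), 1)
        recall = tp / max(gt_bin.sum(), 1)
        f1 = 2 * precision * recall / max(precision + recall, 1e-12)
        return {"cosine": cosine, "precision": precision,
                "recall": recall, "f1": f1}

    # Ground truth: features 0 and 2 are relevant; the explainer mostly
    # agrees but also assigns some weight to feature 3.
    print(compare_feature_importance([1.0, 0.0, 1.0, 0.0], [0.9, 0.1, 0.7, 0.6]))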

Dates and versions

hal-04468368, version 1 (04-09-2024)

Identifiers

HAL Id: hal-04468368
DOI: 10.1016/J.NEUCOM.2023.126642

Cite

Jesus Antonanzas, Yunzhe Jia, Eibe Frank, Albert Bifet, Bernhard Pfahringer. teex: A toolbox for the evaluation of explanations. Neurocomputing, 2023, 555, pp.126642. ⟨10.1016/J.NEUCOM.2023.126642⟩. ⟨hal-04468368⟩