Entropic Variable Boosting for Explainability & Interpretability in Machine Learning - Université Toulouse III - Paul Sabatier - Toulouse INP
Preprint, working paper. Year: 2018

Entropic Variable Boosting for Explainability & Interpretability in Machine Learning

Abstract

In this paper, we present a new explainability formalism to make clear the impact of each variable on the predictions given by black-box decision rules. Our method evaluates the decision rules on test samples generated in such a way that each variable is stressed incrementally while the original distribution of the machine learning problem is preserved. We then propose a new computationally efficient algorithm to stress the variables, which only reweights the reference observations and predictions. This makes our methodology scalable to large datasets. Results obtained on standard machine learning datasets are presented and discussed.
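The paper itself is not reproduced on this page, but a minimal sketch of the reweighting idea summarized in the abstract could look as follows, assuming the stress on a variable is implemented as an exponential tilting of the empirical weights (closest reweighting, in KL divergence, to the uniform empirical distribution that hits a target mean). The function `entropic_weights`, the variable names, and the toy "black box" below are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import brentq

def entropic_weights(x, target_mean, lam_bound=50.0):
    """Exponential-tilting weights w_i proportional to exp(lam * x_i), with lam
    chosen so that the weighted mean of x equals target_mean. Among all
    reweightings achieving that mean, this one is closest in KL divergence to
    the uniform empirical weights."""
    def tilted(lam):
        z = lam * x
        w = np.exp(z - z.max())            # subtract max for numerical stability
        return w / w.sum()
    def gap(lam):
        return tilted(lam) @ x - target_mean
    lam = brentq(gap, -lam_bound, lam_bound)  # root-find the tilting parameter
    return tilted(lam)

# Hypothetical usage: stress one variable of a test set and reweight the
# black-box predictions accordingly (no retraining or resampling needed).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y_pred = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(float)   # stand-in for a black box

j = 2                                                   # variable under stress
for shift in np.linspace(-1.0, 1.0, 5):                 # incremental stress levels
    w = entropic_weights(X[:, j], X[:, j].mean() + shift)
    rate = w @ y_pred                                   # weighted positive-prediction rate
    print(f"mean shift {shift:+.2f} -> positive-prediction rate {rate:.3f}")
```

Because only the weights change, the black-box predictions are computed once on the reference sample and reused for every stress level, which is what makes this kind of reweighting scheme cheap on large datasets.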
Main file: Bagalouri.pdf (343.63 KB). Origin: files produced by the author(s)

Dates and versions

hal-01897642, version 1 (17-10-2018)

Identifiers

Cite

François Bachoc, Fabrice Gamboa, Jean-Michel Loubes, Laurent Risser. Entropic Variable Boosting for Explainability & Interpretability in Machine Learning. 2018. ⟨hal-01897642⟩