Preprint, Working Paper. Year: 2020

Looking Deeper into Tabular LIME

Abstract

Interpretability of machine learning algorithms is an urgent need. Numerous methods have appeared in recent years, but do their explanations make sense? In this paper, we present a thorough theoretical analysis of one of these methods, LIME, in the case of tabular data. We prove that in the large sample limit, the interpretable coefficients provided by Tabular LIME can be computed explicitly as a function of the algorithm parameters and of some expectation computations related to the black-box model. When the function to explain has some nice algebraic structure (linear, multiplicative, or sparsely depending on a subset of the coordinates), our analysis provides interesting insights into the explanations produced by LIME. These can be applied to a range of machine learning models including Gaussian kernels or CART random forests. As an example, for linear functions we show that LIME has the desirable property of providing explanations that are proportional to the coefficients of the function to explain and of ignoring coordinates that are not used by that function. For partition-based regressors, on the other hand, we show that LIME produces undesired artifacts that can yield misleading explanations.
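To make the linear-function claim concrete, here is a minimal sketch using the open-source `lime` Python package (not code from the paper); the linear model f(x) = ⟨β, x⟩, the sampling setup, and all parameter values are illustrative assumptions. Under the abstract's result, with many perturbed samples the reported coefficients should be roughly proportional to β, with near-zero weight on the unused third coordinate.

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer

# Illustrative linear model f(x) = <beta, x>; the third coordinate is unused.
beta = np.array([2.0, -1.0, 0.0, 4.0])

def predict_fn(X):
    # Black-box regressor seen by LIME: a simple linear function.
    return X @ beta

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 4))   # only defines Tabular LIME's sampling/discretization

explainer = LimeTabularExplainer(
    X_train,
    mode="regression",
    feature_names=["x1", "x2", "x3", "x4"],
)

xi = rng.normal(size=4)                # the example to explain

# A large num_samples approximates the large-sample limit studied in the paper.
exp = explainer.explain_instance(xi, predict_fn, num_features=4, num_samples=20000)
print(exp.as_list())                   # expected: weights roughly proportional to beta, ~0 for x3
```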

Domains

Other [stat.ML]

Dates and versions

hal-02948641 , version 1 (24-09-2020)

Identifiers

Cite

Damien Garreau, Ulrike von Luxburg. Looking Deeper into Tabular LIME. 2020. ⟨hal-02948641⟩