Comparing Strategies for Post-Hoc
Explanations in Machine Learning
Models
Aabhas Vij and Preethi Nanjundan
Abstract Most machine learning models act as black boxes; hence, the need to interpret them is rising. There are multiple approaches to understanding the outcomes of a model, but to trust the interpretations, these approaches need closer scrutiny. This project compared three such frameworks: ELI5, LIME, and SHAP. ELI5 and LIME follow the same approach to interpreting the outcomes of machine learning algorithms, building an explainable surrogate model in the vicinity of the data point to be explained, whereas SHAP uses Shapley values, a game-theoretic approach to assigning feature attribution. LIME outputs an R-squared value along with its feature attribution reports, which helps quantify how much trust to place in those interpretations. The R-squared value of the surrogate model varies across machine learning models. SHAP theoretically trades time for accuracy: assigning SHAP values to features is time- and compute-intensive, and hence may require sampling beforehand. SHAP triumphs over LIME with respect to optimization for different kinds of machine learning models, as it provides explainers tailored to different model types, whereas LIME has one generic explainer for all model types.
Keywords Interpretability · LIME · SHAP · Explainable AI · ELI5
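The game-theoretic attribution that SHAP builds on can be illustrated with a minimal, self-contained sketch of the exact Shapley value computation. This is a toy example, not the optimized explainers the SHAP library ships: the model is a hypothetical 3-feature linear model with a zero baseline, and the `value`/`shapley_values` names are illustrative. Note that each feature requires a sum over every coalition of the remaining features, which is why exact computation scales exponentially and sampling becomes necessary in practice.

```python
from itertools import combinations
from math import factorial

def shapley_values(value, n):
    """Exact Shapley values for n players: for each player i, average its
    marginal contribution value(S | {i}) - value(S) over all coalitions S,
    weighted by |S|! * (n - |S| - 1)! / n!."""
    phis = []
    for i in range(n):
        others = [p for p in range(n) if p != i]
        phi = 0.0
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (value(set(S) | {i}) - value(set(S)))
        phis.append(phi)
    return phis

# Toy linear model f(x) = 2*x0 + 3*x1 - x2; features absent from the
# coalition are replaced by their baseline value.
weights = [2.0, 3.0, -1.0]
x = [1.0, 0.5, 2.0]
baseline = [0.0, 0.0, 0.0]

def value(S):
    return sum(w * (x[j] if j in S else baseline[j])
               for j, w in enumerate(weights))

print(shapley_values(value, 3))  # ≈ [2.0, 1.5, -2.0], i.e. w_j * (x_j - baseline_j)
```

For a linear model the Shapley values reduce to each weight times the feature's deviation from baseline, and they sum to the difference between the full prediction and the baseline prediction (the efficiency property).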
1 Introduction
Artificial intelligence (AI) is playing a major role in automating day-to-day tasks in the real world. There is no denying that many business decisions depend heavily on artificial intelligence [1]. This dependency is justified by the accuracy of the models AI runs on. In the earlier era of machine learning, the models were simpler and easier to explain [2]. As this era advances, the models are getting more
A. Vij · P. Nanjundan (B)
Christ (Deemed To Be University), Lavasa, India
e-mail: preethi.n@christuniversity.in
A. Vij
e-mail: aabhas.vij@science.christuniversity.in
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
S. Shakya et al. (eds.), Mobile Computing and Sustainable Informatics, Lecture Notes on
Data Engineering and Communications Technologies 68,
https://doi.org/10.1007/978-981-16-1866-6_41
585