ibreakdown - model agnostic explanations with interactions
ibreakdown is a model-agnostic prediction explainer with interaction support; the library can show the contribution of each feature to your prediction value.
SHAP and LIME consider only local additive feature attributions, whereas ibreakdown also evaluates local feature interactions.
The algorithm is based on ideas described in the paper "iBreakDown: Uncertainty of Model Explanations for Non-additive Predictive Models" (https://arxiv.org/abs/1903.11420) and on the reference implementation in R (iBreakDown).
The algorithm works in a similar spirit to SHAP or Break Down but is not restricted to additive effects. The intuition behind it is the following:

1. Calculate a single-step additive contribution for each feature.
2. Calculate a single-step contribution for every pair of features, then subtract the additive contributions to isolate the interaction-specific contribution.
3. Order the interaction effects and additive effects in a list that is used to determine the sequential contributions.

This simple intuition generalizes to higher-order interactions.
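The first two steps above can be sketched in a few lines of numpy. This is a minimal illustration of the idea, not the library's actual API: the toy `predict` model, the background sample, and the helper names are all assumptions made for the example.

```python
import numpy as np

def single_contribution(predict, X, x, i):
    """Step 1: fix feature i to its value in the explained instance x
    across the background data and measure the shift in mean prediction."""
    X_mod = X.copy()
    X_mod[:, i] = x[i]
    return predict(X_mod).mean() - predict(X).mean()

def pair_contribution(predict, X, x, i, j):
    """Step 2 (first half): fix features i and j jointly."""
    X_mod = X.copy()
    X_mod[:, [i, j]] = x[[i, j]]
    return predict(X_mod).mean() - predict(X).mean()

# Hypothetical toy model with a genuine interaction term x0 * x1.
predict = lambda X: X[:, 0] + X[:, 1] + X[:, 0] * X[:, 1]

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))   # background sample
x = np.array([1.0, 1.0])         # instance being explained

add_0 = single_contribution(predict, X, x, 0)
add_1 = single_contribution(predict, X, x, 1)
joint = pair_contribution(predict, X, x, 0, 1)

# Step 2 (second half): the interaction-specific contribution is the
# pairwise contribution minus the two additive contributions.
interaction = joint - add_0 - add_1
```

For this toy model the interaction term contributes roughly 1.0 at `x = (1, 1)`, while each additive effect is also close to 1.0; step 3 would then rank these effects by magnitude before attributing them sequentially.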
An in-depth explanation can be found in the algorithm authors' free book Predictive Models: Explore, Explain, and Debug: https://pbiecek.github.io/PM_VEE/iBreakDown.html
Supports prediction explanations for classification and regression
Easy-to-use API
Works with pandas and numpy
Supports interactions between features
Installation is simple:
$ pip install ibreakdown