A user-friendly Python package for computing and plotting machine learning explainability output.

Project description

scikit-explain is a user-friendly Python module for machine learning explainability. Current explainability products include permutation importance (single-pass and multiple-pass), partial dependence, accumulated local effects (ALE), and SHAP-based feature contributions, each demonstrated below.

These explainability methods are discussed at length in Christoph Molnar's Interpretable Machine Learning. The primary feature of this package is the accompanying built-in plotting methods, which are designed to be easy to use while producing publication-quality figures. The computations leverage parallelization when possible. Documentation for scikit-explain can be found at https://scikit-explain.readthedocs.io/en/master/.

The package is under active development and will likely contain bugs or errors. Feel free to raise issues!

This package is largely original code, but it also includes snippets or chunks of code from preexisting packages. Our goal is not to take credit from other code authors, but to provide a single source for computing several machine learning explainability methods. Here is a list of packages used in scikit-explain: PyALE, PermutationImportance, ALEPython, SHAP, scikit-learn, LIME.

If you employ scikit-explain in your research, please cite this GitHub repository and the relevant packages listed above.

If you are experiencing issues with loading the tutorial Jupyter notebooks, you can paste the URL of a notebook into https://nbviewer.jupyter.org/.

Install

scikit-explain can be installed through pip; an upload to conda-forge is in progress.

pip install scikit-explain

Dependencies

scikit-explain is compatible with Python 3.8 or newer. scikit-explain requires the following packages:

numpy
scipy
pandas
scikit-learn
matplotlib
shap>=0.30.0
xarray>=0.16.0
tqdm
statsmodels
seaborn>=0.11.0

Initializing scikit-explain

The interface of scikit-explain is ExplainToolkit, which houses all of the explainability methods and their corresponding plotting methods. See the tutorial notebooks for examples.

import skexplain

# Load three ML models (random forest, gradient-boosted tree, and logistic regression)
# trained on a subset of the road surface temperature data from Handler et al. (2020).
estimators = skexplain.load_models()
X, y = skexplain.load_data()

explainer = skexplain.ExplainToolkit(estimators=estimators, X=X, y=y)
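You can also wrap your own fitted model. A minimal sketch, assuming ExplainToolkit accepts a single (name, estimator) tuple in place of the list returned by load_models():

from sklearn.ensemble import RandomForestClassifier

# Hypothetical example: a user-trained model passed as a (name, estimator) tuple.
rf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)
explainer = skexplain.ExplainToolkit(estimators=('Random Forest', rf), X=X, y=y)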

Permutation Importance

scikit-explain includes both single-pass and multiple-pass permutation importance methods (Breiman 2001, Lakshmanan et al. 2015, McGovern et al. 2019). scikit-explain also has accompanying plotting methods; the tutorial shows the flexibility users have for making publication-quality figures.

perm_results = explainer.permutation_importance(n_vars=10, evaluation_fn='auc')
explainer.plot_importance(data=perm_results)
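For a baseline comparison, single-pass permutation importance can also be computed directly with scikit-learn. A minimal sketch, not part of scikit-explain's API; the unpacking below assumes the (name, estimator) tuple convention of load_models():

# Minimal sketch: single-pass permutation importance via scikit-learn,
# shown only for comparison with the scikit-explain call above.
from sklearn.inspection import permutation_importance as sklearn_perm_imp

name, model = estimators[0]  # assumes (name, estimator) tuples
sk_result = sklearn_perm_imp(model, X, y, scoring='roc_auc', n_repeats=10, random_state=0)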

A sample notebook can be found here: Permutation Importance

Partial Dependence and Accumulated Local Effects

To compute the expected functional relationship between a feature and an ML model's prediction, scikit-explain provides partial dependence, accumulated local effects (ALE), and SHAP dependence. There is also an option for second-order interaction effects. Features can be selected manually, or you can run the permutation importance first and a built-in method will retrieve the most important features. It is also possible to configure the plot to show readable feature names.

# Assumes .permutation_importance has already been run (see above).
important_vars = explainer.get_important_vars(perm_results, multipass=True, nvars=7)

ale = explainer.ale(features=important_vars, n_bins=20)
explainer.plot_ale(ale)

Additionally, you can use the same code snippet to compute the second-order ALE (see the notebook for more details).
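For comparison, first- and second-order partial dependence can also be plotted directly with scikit-learn. A minimal sketch, not scikit-explain's API; the feature names below follow the Handler et al. dataset:

# Minimal sketch: the same relationships via scikit-learn's partial dependence
# (for comparison only; a tuple of features requests the two-way interaction).
from sklearn.inspection import PartialDependenceDisplay

name, model = estimators[0]  # assumes (name, estimator) tuples
PartialDependenceDisplay.from_estimator(
    model, X,
    features=['sat_irbt', 'd_rad_d', ('sat_irbt', 'd_rad_d')],
)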

A sample notebook can be found here:

Feature Contributions

To explain specific examples, you can use SHAP values. scikit-explain uses the shap.Explainer method, which automatically determines the most appropriate Shapley value algorithm (see their docs). scikit-explain can create the summary and dependence plots from the shap Python package, but they are adapted for multiple features and an easier user interface. It is also possible to plot contributions for a single example or summarized by model performance.

import shap

# Explain a single example (the first row of the loaded data).
single_example = X.iloc[[0]]
explainer = skexplain.ExplainToolkit(estimators=estimators[0], X=single_example)

shap_kwargs = {
    'masker': shap.maskers.Partition(X, max_samples=100, clustering="correlation"),
    'algorithm': 'permutation',
}

results = explainer.local_contributions(method='shap', shap_kwargs=shap_kwargs)
fig = explainer.plot_contributions(results)
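As noted above, local_contributions wraps shap.Explainer; for orientation, here is a standalone sketch of the equivalent shap call outside scikit-explain:

# Minimal sketch: the equivalent standalone shap call.
name, model = estimators[0]  # assumes (name, estimator) tuples
masker = shap.maskers.Partition(X, max_samples=100, clustering="correlation")
shap_explainer = shap.Explainer(model.predict, masker, algorithm='permutation')
shap_values = shap_explainer(single_example)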

# Performance-based contributions, summarized over the full dataset.
explainer = skexplain.ExplainToolkit(estimators=estimators[0], X=X, y=y)

results = explainer.local_contributions(method='shap', shap_kwargs=shap_kwargs, performance_based=True)
fig = explainer.plot_contributions(results)

explainer = skexplain.ExplainToolkit(estimators=estimators[0], X=X, y=y)

results = explainer.shap(shap_kwargs=shap_kwargs)
explainer.plot_shap(plot_type='summary', shap_values=results)

from skexplain.common import plotting_config

features = ['tmp2m_hrs_bl_frez', 'sat_irbt', 'sfcT_hrs_ab_frez', 'tmp2m_hrs_ab_frez', 'd_rad_d']
explainer.plot_shap(features=features,
                    plot_type='dependence',
                    shap_values=results,
                    display_feature_names=plotting_config.display_feature_names,
                    display_units=plotting_config.display_units,
                    to_probability=True)
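plotting_config supplies readable names and units for the bundled dataset. For your own data, a sketch assuming display_feature_names and display_units accept plain dicts keyed by column name:

# Hypothetical mappings for a user dataset (illustrative names).
display_feature_names = {'tmp2m_hrs_bl_frez': 'Hours T2m below freezing'}
display_units = {'tmp2m_hrs_bl_frez': 'hrs'}

explainer.plot_shap(features=['tmp2m_hrs_bl_frez'],
                    plot_type='dependence',
                    shap_values=results,
                    display_feature_names=display_feature_names,
                    display_units=display_units)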

A sample notebook can be found here:

Tutorial notebooks

The notebooks provide the package documentation and demonstrate the scikit-explain API, which was used to create the above figures. As noted above, if you have trouble loading the Jupyter notebooks, paste their URLs into https://nbviewer.jupyter.org/.
