Generic Interpretability package

trelawney

Trelawney is a general interpretability package that aims to provide a common API for most modern interpretability methods, to shed light on sklearn-compatible models (support for Keras and XGBoost is tested).

Trelawney will try to provide you with two kinds of explanations when possible:

  • a global explanation of the model that highlights the most important features the model uses to make its predictions globally

  • a local explanation of the model that tries to shed light on why the model made a specific prediction for a specific observation
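The global/local distinction can be illustrated with a plain logistic regression using scikit-learn alone (this is a conceptual sketch, not the trelawney API; the variable names below are made up for illustration):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)
model = LogisticRegression().fit(X, y)

# Global explanation: which features matter overall
# (here, the absolute values of the coefficients).
global_importance = np.abs(model.coef_[0])

# Local explanation: why this one row got its prediction
# (per-feature contribution coef_i * x_i to the decision function).
row = X[0]
local_contributions = model.coef_[0] * row

print("global:", global_importance)
print("local :", local_contributions)
```

A global explanation is one vector per model; a local explanation is one vector per observation, which is why packages like Lime only produce the latter.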

The Trelawney package is built around:

  • some model-specific explainers that use the inner workings of some types of models to explain them:
    • LogRegExplainer that uses the weights of your logistic regression to produce global and local explanations of your model

    • TreeExplainer that uses the decision paths of your tree (single-tree models only) to produce explanations of the model

  • some model-agnostic explainers that should work with all models:
    • LimeExplainer that uses the Lime package to create local explanations only (the local nature of Lime prevents it from generating global explanations of a model)

    • ShapExplainer that uses the SHAP package to create local and global explanations of your model

    • SurrogateExplainer that creates a general surrogate of your model (fitted on the output of your model) using an explainable model (DecisionTreeClassifier and LogisticRegression for now). The explainer then uses the internals of the surrogate model to explain your black-box model, and also reports how well the surrogate approximates the black-box one
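The surrogate idea itself can be sketched with scikit-learn alone (this is the underlying technique, not the trelawney API): fit an interpretable model on the black-box model's own predictions, then measure how faithfully it reproduces them.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# The black box we want to explain.
black_box = RandomForestClassifier(random_state=0).fit(X, y)
black_box_preds = black_box.predict(X)

# The surrogate is trained on the black box's outputs, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, black_box_preds)

# Fidelity: how often the surrogate agrees with the black box. A low
# fidelity means the surrogate's explanations should not be trusted.
fidelity = accuracy_score(black_box_preds, surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")
```

The fidelity score is what SurrogateExplainer reports when "informing you on how well the surrogate model explains the black box one".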

Quick Tutorial (30s to Trelawney):

Here is an example of how to use a Trelawney explainer:

>>> model = LogisticRegression().fit(X, y)
>>> # creating and fitting the explainer
>>> explainer = ShapExplainer()
>>> explainer.fit(model, X, y)
>>> # explaining observations
>>> explanation = explainer.explain_local(X_explain)
[
    {'var_1': 0.1, 'var_2': -0.07, ...},
    ...
    {'var_1': 0.23, 'var_2': -0.15, ...},
]
>>> explanation = explainer.graph_local_explanation(X_explain.iloc[:1, :])
Local Explanation Graph
>>> explanation = explainer.feature_importance(X_explain)
{'var_1': 0.5, 'var_2': 0.2, ...}
>>> explanation = explainer.graph_feature_importance(X_explain)
Feature Importance Graph

FAQ

Why should I use Trelawney rather than Lime and SHAP?

While you can definitely use the Lime and SHAP packages directly (they will give you more control over how to use them), they are specialized packages with different APIs, graphs and vocabularies. Trelawney offers a unified API, representation and vocabulary for all state-of-the-art explanation methods, so that you don't lose time adapting to each new method: just change a class and Trelawney will adapt to you.

Coming Soon

  • Regressor Support (PR welcome)

  • Image and text Support (PR welcome)

Credits

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

History

0.1.0 (2019-10-02)

  • First release on PyPI.
