
🔍 Overview

Shapash is a Python library designed to make machine learning interpretable and comprehensible for everyone. It offers various visualizations with clear and explicit labels that are easily understood by all.

With Shapash, you can generate a Webapp that simplifies the comprehension of interactions between the model's features, and allows seamless navigation between local and global explainability. This Webapp enables Data Scientists to effortlessly understand their models and share their results with both data scientists and non-data experts.

Additionally, Shapash contributes to data science auditing by presenting valuable information about any model and data in a comprehensive report.

Shapash is suitable for Regression, Binary Classification and Multiclass problems. It is compatible with numerous models, including Catboost, Xgboost, LightGBM, Sklearn Ensemble, Linear models, and SVM. For other models, solutions to integrate Shapash are available; more details can be found here.

[!NOTE] If you want to give us feedback: Feedback form

Shapash App Demo

🌱 Documentation and resources

🎉 What's new?

| Version | New Feature | Description |
| ------- | ----------- | ----------- |
| 2.3.x | Additional dataset columns (New demo, Article) | In Webapp: target and error columns added to the dataset, plus the possibility to add features outside the model for more filtering options |
| 2.3.x | Identity card (New demo, Article) | In Webapp: new identity card summarizing the information of the selected sample |
| 2.2.x | Picking samples (Article) | New tab in the webapp for picking samples. The graph represents "True Values vs Predicted Values" |
| 2.2.x | Dataset Filter | New tab in the webapp to filter data, plus several webapp improvements: subtitles, labels, screen adjustments |
| 2.0.x | Refactoring Shapash | Refactoring of the compile method and init attributes; refactored implementation for new backends |
| 1.7.x | Variabilize Colors | Possibility to use your own color palette for outputs adapted to your design |
| 1.6.x | Explainability Quality Metrics (Article) | To help increase confidence in explainability methods, you can evaluate the relevance of your explainability using 3 metrics: Stability, Consistency and Compacity |
| 1.4.x | Groups of features (Demo) | You can now group together features that share common properties. This option can be useful if your model has a lot of features. |
| 1.3.x | Shapash Report (Demo) | A standalone HTML report that constitutes the basis of an audit document. |

🔥 Features

  • Display clear and understandable results: plots and outputs use explicit labels for each feature and its values

  • Allow Data Scientists to quickly understand their models using a webapp to easily navigate between global and local explainability, and understand how the different features contribute: Live Demo Shapash-Monitor

  • Summarize and export local explanation

Shapash provides concise and clear local explanations, enabling users of any data background to understand a local prediction of a supervised model through a summarized and explicit explanation (a short sketch follows this list)

  • Evaluate the quality of your explainability with various metrics

  • Effortlessly share and discuss results with non-Data users

  • Select subsets for in-depth analysis of explainability by filtering based on explanatory and additional features, as well as on correct or wrong predictions: Picking Examples to Understand Machine Learning Model

  • Deploy the interpretability part of your project: from model training to deployment (API or batch mode)

  • Contribute to the auditability of your model by generating a standalone HTML report of your project. Report Example

We believe that this report will offer valuable support for auditing models and data, leading to improved AI governance. Data Scientists can now provide anyone interested in their project with a document that captures various aspects of their work as the foundation for an audit report. This document can be easily shared among teams (internal audit, DPO, risk, compliance...).
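
As an illustration of the summarized local explanation mentioned above, here is a minimal sketch using the to_pandas export described in the Shapash documentation (only a subset of parameters is shown; check the current signature):

# Assumes `xpl` is a compiled SmartExplainer (see the Quickstart below)
summary_df = xpl.to_pandas(
    max_contrib=3,  # keep only the 3 largest contributions per prediction
)
summary_df.to_csv("local_explanations.csv", index=False)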

⚙️ How Shapash works

Shapash is an overlay package for libraries focused on model interpretability. It uses Shap or Lime backend to compute contributions. Shapash builds upon the various steps required to create a machine learning model, making the results more understandable.

Shapash is suitable for Regression, Binary Classification and Multiclass problems.
It is compatible with numerous models: Catboost, Xgboost, LightGBM, Sklearn Ensemble, Linear models and SVM.

If your model is not in the list of compatible models, it is possible to provide Shapash with local contributions calculated with shap or another method. Here's an example of how to provide contributions to Shapash. An issue has been created to enhance this use case.
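
For instance, a minimal sketch of that pattern, assuming the compile method accepts a contributions DataFrame as in the tutorial linked above (model and xtest are placeholders for your fitted model and test set):

import pandas as pd
import shap

from shapash import SmartExplainer

# Compute local contributions yourself, e.g. with shap for a tree-based regression model
explainer = shap.TreeExplainer(model)
contributions = pd.DataFrame(
    explainer.shap_values(xtest),  # one row per observation, one column per feature
    columns=xtest.columns,
    index=xtest.index,
)

# Hand the precomputed contributions to Shapash instead of letting it call a backend
xpl = SmartExplainer(model=model)
xpl.compile(x=xtest, contributions=contributions)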

Shapash can use a category_encoders object, a sklearn ColumnTransformer or simply a features dictionary.

  • Category_encoder: OneHotEncoder, OrdinalEncoder, BaseNEncoder, BinaryEncoder, TargetEncoder
  • Sklearn ColumnTransformer: OneHotEncoder, OrdinalEncoder, StandardScaler, QuantileTransformer, PowerTransformer
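
For example, a minimal sketch of passing a category_encoders encoder as preprocessing so that outputs display the original categories (column and variable names are placeholders):

import category_encoders as ce

from shapash import SmartExplainer

# Fit the encoder on the raw categorical features, then train the model on the encoded data
encoder = ce.OrdinalEncoder(cols=["neighborhood", "house_type"])  # placeholder columns
xtrain_encoded = encoder.fit_transform(xtrain_raw)
regressor.fit(xtrain_encoded, ytrain)

# Shapash uses the encoder's inverse_transform so plots show the original category labels
xpl = SmartExplainer(model=regressor, preprocessing=encoder)
xpl.compile(x=encoder.transform(xtest_raw))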

🛠 Installation

Shapash is intended to work with Python versions 3.9 to 3.12. Installation can be done with pip:

pip install shapash

In order to generate the Shapash Report, some extra requirements are needed. You can install them using the following command:

pip install shapash[report]

If you encounter compatibility issues you may check the corresponding section in the Shapash documentation here.

🕐 Quickstart

The five steps, from displaying results to deployment:

  • Step 1: Declare SmartExplainer Object

    There is one mandatory parameter when declaring the SmartExplainer: the model. You can declare a features dict here to specify the labels to display

from shapash import SmartExplainer

xpl = SmartExplainer(
    model=regressor,
    features_dict=house_dict,  # Optional parameter
    preprocessing=encoder,  # Optional: compile step can use inverse_transform method
    postprocessing=postprocess,  # Optional: see tutorial postprocessing
)
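
For reference, the features dict is simply a mapping from raw column names to the labels displayed in plots and in the webapp (names below are placeholders):

house_dict = {
    "1stFlrSF": "First floor square feet",  # placeholder column name and label
    "OverallQual": "Overall material and finish quality",
}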
  • Step 2: Compile Dataset, ...

    There is one mandatory parameter in the compile method: the dataset

xpl.compile(
    x=xtest,
    y_pred=y_pred,  # Optional: for your own prediction (by default: model.predict)
    y_target=yTest,  # Optional: allows to display True Values vs Predicted Values
    additional_data=xadditional,  # Optional: additional dataset of features for Webapp
    additional_features_dict=features_dict_additional,  # Optional: dict additional data
)
  • Step 3: Display output

    There are several outputs and plots available. For example, you can launch the web app:

app = xpl.run_app()

Live Demo Shapash-Monitor
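
Beyond the web app, several plot methods are available directly on the explainer; a short sketch based on the Shapash plot tutorials (feature name and index are placeholders):

xpl.plot.features_importance()  # global feature importance
xpl.plot.contribution_plot("OverallQual")  # placeholder feature: contributions vs feature values
xpl.plot.local_plot(index=42)  # placeholder index: summarized explanation of one prediction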

  • Step 4: Generate the Shapash Report

    This step allows you to generate a standalone HTML report of your project using the different splits of your dataset and the metrics you used:

xpl.generate_report(
    output_file="path/to/output/report.html",
    project_info_file="path/to/project_info.yml",
    x_train=xtrain,
    y_train=ytrain,
    y_test=ytest,
    title_story="House prices report",
    title_description="""This document is a data science report of the kaggle house prices tutorial project.
        It was generated using the Shapash library.""",
    metrics=[{"name": "MSE", "path": "sklearn.metrics.mean_squared_error"}],
)

Report Example

  • Step 5: From training to deployment : SmartPredictor Object

    Shapash provides a SmartPredictor object to deploy the summary of local explanations for operational needs. It is an object dedicated to deployment, lighter than SmartExplainer, with additional consistency checks. SmartPredictor can be used with an API or in batch mode. It provides predictions and detailed or summarized local explainability using appropriate wording.

predictor = xpl.to_smartpredictor()

See the tutorials to learn how to use the SmartPredictor object.
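
A minimal batch-mode sketch, assuming the save / load_smartpredictor / add_input / predict / summarize methods described in the SmartPredictor tutorials (xnew is a placeholder for new observations):

# Save the predictor once, then reload it in the deployment environment
predictor.save("./predictor.pkl")

from shapash.utils.load_smartpredictor import load_smartpredictor

predictor = load_smartpredictor("./predictor.pkl")

# Feed new data, then get predictions and summarized local explanations
predictor.add_input(x=xnew)
predictions = predictor.predict()
summary = predictor.summarize()  # explicit, summarized wording per prediction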

📖 Tutorials

This GitHub repository offers many tutorials to help you get started with Shapash.

  • Overview
  • Charts and plots
  • Different ways to use Encoders and Dictionaries
  • Displaying data with postprocessing (using the postprocessing parameter in the compile method)
  • Using different backends
  • Evaluating the quality of your explainability
  • Generate a report of your project
  • Analysing your model via Shapash WebApp

🤝 Contributors

🏆 Awards
