
A comprehensive Python package for interpreting and explaining machine learning and deep learning models. Includes support for feature attention mechanisms and integrates popular explanation methods such as LIME, SHAP, Grad-CAM, permutation importance, and saliency maps. Offers a unified interface for tabular, image, and other data types to enhance model transparency and interpretability.

Project description

Model Explain




model_explain is a Python package for interpreting and explaining machine learning models. It provides a unified interface for popular explanation techniques, including LIME, SHAP, and Grad-CAM, and supports a wide range of models and data types.

Key Features

  • Unified API for LIME and SHAP explanations
  • Model-agnostic: works with any model supporting predict or predict_proba
  • Tabular, image, and text data support
  • Visualizations for both local and global explanations
  • Easy integration with scikit-learn, XGBoost, LightGBM, and more
  • Feature importance extraction and plotting
  • Interpretability for individual predictions and datasets
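The model-agnostic contract above amounts to duck typing: an explainer only needs an object exposing predict and/or predict_proba. A minimal sketch of an object satisfying that contract (ConstantModel is a hypothetical toy class, not part of model_explain):

```python
class ConstantModel:
    """Toy classifier exposing the predict/predict_proba contract
    that model-agnostic explainers rely on."""

    def predict(self, X):
        # Always predict class 1, one label per input row.
        return [1 for _ in X]

    def predict_proba(self, X):
        # Fixed class probabilities [P(class 0), P(class 1)] per row.
        return [[0.3, 0.7] for _ in X]


model = ConstantModel()

# The two methods a model-agnostic explainer would call:
assert hasattr(model, "predict") and hasattr(model, "predict_proba")
print(model.predict_proba([[0.0], [1.0]]))  # [[0.3, 0.7], [0.3, 0.7]]
```

Any scikit-learn, XGBoost, or LightGBM estimator already satisfies this interface, which is why no wrapper code is needed for those libraries.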

Installation

Install from PyPI:

pip install model-explain

Or, if you prefer to clone the repository and install manually:

git clone https://github.com/vaibhav-k/model_explain.git
cd model_explain
pip install .

Quick Start

Tabular Data Example (LIME)

from model_explain.explainers.lime_explainer import lime_explainer

# model: trained scikit-learn model
# X_test: pandas DataFrame of test features

explanation = lime_explainer(model, X_test, instance_index=0)
explanation.show_in_notebook()

Tabular Data Example (SHAP)

import shap
from model_explain.explainers.shap_explainer import shap_explainer

# model: trained machine learning model
# X_test: pandas DataFrame of test features

shap_values = shap_explainer(model, X_test)
shap.summary_plot(shap_values, X_test)

Image Data Example

from model_explain.explainers.grad_cam import GradCAM
import matplotlib.pyplot as plt

# model: your trained CNN model (e.g., from torchvision)
# image: a preprocessed image tensor of shape [1, C, H, W]
# predicted_class: integer index of the predicted class

explainer = GradCAM(model, target_layer_name="layer4")  # specify the last conv layer
heatmap = explainer(image, target_idx=predicted_class)

plt.imshow(heatmap, cmap="jet", alpha=0.5)
plt.title("Grad-CAM Heatmap")
plt.axis("off")
plt.show()

📝 Text Data Example

from model_explain.explainers.text_explainer import TextExplainer

# model: your trained text classification model
# class_names: list of class names for the model
# tokenizer: tokenizer for text preprocessing (if needed)

explainer = TextExplainer(model, class_names=["negative", "positive"])
explainer.explain_lime("This movie was surprisingly good!")

# For Transformers
explainer.explain_shap(["I love this movie!"], tokenizer=tokenizer)

💡 Quick Tips

  • Use LIME when:

    • You’re working with traditional ML models on bag-of-words features (e.g., pipelines built with TfidfVectorizer or CountVectorizer).
    • You need fast, approximate local explanations for many samples.
  • Use SHAP when:

    • Your model is a deep Transformer or you need precise, token-level insight.
    • You want globally consistent attributions (e.g., comparing feature importances across texts).
  • For best results, you can combine both:

    • Run LIME for a quick sanity check.
    • Use SHAP for detailed debugging and deeper interpretability.

Supported Models

  • Scikit-learn models
  • XGBoost
  • LightGBM
  • PyTorch models
  • Any model with predict or predict_proba methods

Visualizations

  • SHAP summary plot: global feature importance (use plot_feature_importance for custom bar plots)
  • LIME explanation plot: local feature contributions (use plot_feature_importance for instance-level contributions)
  • Image region importance (Grad-CAM heatmap): highlights spatial regions in images that most influence the model's prediction
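To make the "global feature importance" idea concrete: a SHAP summary bar plot ranks features by the mean absolute attribution across samples. This is an illustrative sketch of that aggregation with made-up attribution values and hypothetical feature names, not model_explain's internal code:

```python
# Per-sample SHAP attributions (rows = samples, columns = features).
shap_values = [
    [ 0.40, -0.10, 0.05],
    [-0.20,  0.30, 0.05],
    [ 0.10, -0.50, 0.00],
]
features = ["age", "income", "tenure"]  # hypothetical feature names

# Global importance of a feature = mean of |attribution| over all samples.
n_samples = len(shap_values)
importance = {
    name: sum(abs(row[i]) for row in shap_values) / n_samples
    for i, name in enumerate(features)
}

# Features ranked from most to least important, as in a summary bar plot.
ranked = sorted(importance, key=importance.get, reverse=True)
print(ranked)  # ['income', 'age', 'tenure']
```

The same ranked values are what a bar chart (e.g., via plot_feature_importance or shap.summary_plot with plot_type="bar") would display as bar heights.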

Use Cases

  • Debugging and validating ML models
  • Regulatory compliance and transparency
  • Feature selection and engineering
  • Enhancing trust in AI systems
  • Explaining model predictions to stakeholders

Contributing

Contributions are welcome! Please read CONTRIBUTING.md for details on how to contribute.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

model_explain-0.4.1.tar.gz (29.7 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

model_explain-0.4.1-py3-none-any.whl (27.8 kB)

Uploaded Python 3

File details

Details for the file model_explain-0.4.1.tar.gz.

File metadata

  • Download URL: model_explain-0.4.1.tar.gz
  • Upload date:
  • Size: 29.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.11

File hashes

Hashes for model_explain-0.4.1.tar.gz
Algorithm Hash digest
SHA256 a122556975edec83981048d633f2aab1a4e6bed053d1a2830cb65ada40952a0d
MD5 3663f59596c360f0153eb9fcc7bc5248
BLAKE2b-256 76729008e66c7b3867a826da064cff8e1054ba034300698df1d0c642ab1ff3fa

See more details on using hashes here.

File details

Details for the file model_explain-0.4.1-py3-none-any.whl.

File metadata

  • Download URL: model_explain-0.4.1-py3-none-any.whl
  • Upload date:
  • Size: 27.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.11

File hashes

Hashes for model_explain-0.4.1-py3-none-any.whl
Algorithm Hash digest
SHA256 8d6ee3ef670b382c1509fd58bd34017ffd2d141cfa563c18c1b83f776bc801dc
MD5 274ee55f622f2610380dd82d74111158
BLAKE2b-256 6f14cb1d7a4264857ca7e1d3397df2cf8ea58753824cc05f7b4a0dcec6ca2cfa

See more details on using hashes here.
