
A comprehensive Python package for interpreting and explaining machine learning and deep learning models. Includes support for feature attention mechanisms and integrates popular explanation methods such as LIME, SHAP, Grad-CAM, permutation importance, and saliency maps. Offers a unified interface for tabular, image, and other data types to enhance model transparency and interpretability.

Model Explain


model_explain is a Python package for interpreting and explaining machine learning models. It provides a unified interface for popular explanation techniques, including LIME, SHAP, and Grad-CAM, and supports a wide range of models and data types.

Key Features

  • Unified API for LIME and SHAP explanations
  • Model-agnostic: works with any model supporting predict or predict_proba
  • Tabular, image, and text data support
  • Visualizations for both local and global explanations
  • Easy integration with scikit-learn, XGBoost, LightGBM, and more
  • Feature importance extraction and plotting
  • Interpretability for individual predictions and datasets

Installation

Install from PyPI:

pip install model-explain

Or, if you prefer to clone the repository and install manually:

git clone https://github.com/vaibhav-k/model_explain.git
cd model_explain
pip install .

Quick Start
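
The examples below assume a trained model and a pandas DataFrame of test features. A minimal, illustrative setup (the dataset and classifier here are assumptions, not requirements of model_explain):

# Illustrative setup for the examples below; any model exposing predict/predict_proba works
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# as_frame=True returns pandas DataFrames, which the examples below expect
data = load_breast_cancer(as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)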

Tabular Data Example (LIME)

from model_explain.explainers.lime_explainer import lime_explainer

# model: trained scikit-learn model
# X_test: pandas DataFrame of test features

explanation = lime_explainer(model, X_test, instance_index=0)
explanation.show_in_notebook()

Tabular Data Example (SHAP)

import shap
from model_explain.explainers.shap_explainer import shap_explainer

# model: trained machine learning model
# X_test: pandas DataFrame of test features

shap_values = shap_explainer(model, X_test)
shap.summary_plot(shap_values, X_test)

Image Data Example

from model_explain.explainers.grad_cam import GradCAM
import matplotlib.pyplot as plt

# model: your trained CNN model (e.g., from torchvision)
# image: a preprocessed image tensor of shape [1, C, H, W]
# predicted_class: integer index of the predicted class

explainer = GradCAM(model, target_layer_name="layer4")  # specify the last conv layer
heatmap = explainer(image, target_idx=predicted_class)

plt.imshow(heatmap, cmap="jet", alpha=0.5)
plt.title("Grad-CAM Heatmap")
plt.axis("off")
plt.show()
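
The snippet above assumes you already have a trained CNN, a preprocessed image tensor, and a predicted class index. One way to obtain them, sketched here with torchvision (the ResNet-18 weights, transforms, and image path are illustrative assumptions, not requirements):

# Illustrative preparation for the Grad-CAM example above (assumes torchvision)
import torch
from PIL import Image
from torchvision import models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # shape [1, C, H, W]
with torch.no_grad():
    predicted_class = model(image).argmax(dim=1).item()  # integer class index

For ResNet-style models, "layer4" in the example above corresponds to the final convolutional block.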

🕒 TimeSeriesExplainer

✨ Features

  • Works with TensorFlow/Keras and PyTorch models
  • Computes per-timestep SHAP attributions
  • Generates:
    • Temporal heatmaps of feature contributions
    • Feature importance over time plots

from model_explain.explainers.time_series_explainer import TimeSeriesExplainer

# model: your trained time series model
# X: input data of shape (num_samples, timesteps, features)

explainer = TimeSeriesExplainer(model, background_data=X[:5])
shap_values = explainer.explain(X[0])
explainer.plot_heatmap(shap_values, feature_names=[f"f{i}" for i in range(4)])
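
The explainer above expects a sequence model and 3-D input. A minimal, illustrative setup with a small Keras LSTM on random data (the architecture and data are assumptions for shape only; any TensorFlow/Keras or PyTorch sequence model should work):

# Illustrative time series model and data; shapes match the example above
import numpy as np
import tensorflow as tf

timesteps, features = 10, 4
X = np.random.rand(100, timesteps, features).astype("float32")  # (num_samples, timesteps, features)
y = np.random.randint(0, 2, size=(100,))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(timesteps, features)),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=1, verbose=0)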

📝 Text Data Example

from model_explain.explainers.text_explainer import TextExplainer

# model: your trained text classification model
# class_names: list of class names for the model
# tokenizer: tokenizer for text preprocessing (if needed)

explainer = TextExplainer(model, class_names=["negative", "positive"])
explainer.explain_lime("This movie was surprisingly good!")

# For Transformers
explainer.explain_shap(["I love this movie!"], tokenizer=tokenizer)
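
The explain_shap call above assumes a Hugging Face Transformers model and its matching tokenizer. A minimal sketch of that setup (the checkpoint name is only an example):

# Illustrative tokenizer/model setup (assumes the transformers library is installed)
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)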

💡 Quick Tips

  • Use LIME when:

    • You’re working with traditional ML pipelines (e.g., TfidfVectorizer or CountVectorizer features with a linear model).
    • You need fast, approximate local explanations for many samples.
  • Use SHAP when:

    • Your model is a deep Transformer or you need precise, token-level insight.
    • You want globally consistent attributions (e.g., comparing feature importances across texts).
  • For best results, you can combine both (see the sketch after this list):

    • Run LIME for a quick sanity check.
    • Use SHAP for detailed debugging and deeper interpretability.
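
A minimal sketch of that combined workflow, reusing the TextExplainer and tokenizer from the text example above (the sentence is only an illustration):

# Run both explainers on the same input: LIME as a quick sanity check, SHAP for token-level detail
text = "The plot was thin, but the performances were outstanding."
explainer.explain_lime(text)
explainer.explain_shap([text], tokenizer=tokenizer)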

Supported Models

  • Scikit-learn models
  • XGBoost
  • LightGBM
  • PyTorch models
  • Any model with predict or predict_proba methods

Visualizations

  • SHAP summary plot: global feature importance (use plot_feature_importance for custom bar plots)
  • LIME explanation plot: local feature contributions (use plot_feature_importance for instance-level contributions)
  • Image region importance (Grad-CAM heatmap): highlights spatial regions in images that most influence the model's prediction
  • Time-series heatmaps: feature contributions across time steps
  • Time-series line plots: average feature importance across time steps

Use Cases

  • Debugging and validating ML models
  • Regulatory compliance and transparency
  • Feature selection and engineering
  • Enhancing trust in AI systems
  • Explaining model predictions to stakeholders

Contributing

Contributions are welcome! Please read CONTRIBUTING.md for details on how to contribute.


License

This project is licensed under the MIT License - see the LICENSE file for details.

