
A library that helps explain AI models quickly and easily

Project description

Easy Explain


Simplify the Explanation of AI Models

Unlock the "why" behind your AI models' decisions with easy-explain, a Python package designed to democratize access to advanced XAI algorithms. By integrating state-of-the-art explanation techniques with minimal code, we make AI transparency accessible to developers and researchers alike.

[!IMPORTANT] The new versions of easy-explain after 0.4.3 have breaking changes. We have changed the logic of different imports to support more models like YoloV8. Have a look at the provided examples.

Requirements

Python Versions Supported

  • Primary: 3.11
  • Also Supported: 3.9, 3.10

Ensure one of these Python versions is installed on your system to use easy-explain.

Install Environment & Dependencies

easy-explain can be seamlessly integrated into your projects with a straightforward installation process:

Installation as a Package

To incorporate easy-explain into your project as a dependency, execute the following command in your terminal:

pip install easy-explain

Features and Functionality

Under the hood, easy-explain uses different packages depending on the model being explained. For classification models it relies on Captum, which helps you understand how input properties affect model predictions and neuron activations, offering insight into how the model performs. Captum builds on the PyTorch library. Other algorithms are supported as well, such as GradCAM and custom-made algorithms like the LRP implementation for YoloV8.

Currently, easy-explain specializes in specific cutting-edge XAI methodologies for images:

  • Occlusion: For deep insight into classification model decisions.
  • CAM: SmoothGradCAMpp & LayerCAM for explainability on image classification models.
  • Layer-wise Relevance Propagation (LRP): Specifically tailored for YoloV8 models, unveiling the decision-making process in object detection tasks.
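To illustrate the occlusion idea: patches of the input are masked one at a time, and the drop in the model's score is attributed to the masked region. A minimal NumPy sketch with a toy scoring function (a generic illustration, not the easy-explain API):

```python
import numpy as np

# Toy "model": scores an image by the mean brightness of its centre region.
def score(img):
    return float(img[2:6, 2:6].mean())

def occlusion_map(img, score_fn, patch=2, baseline=0.0):
    """Slide a patch over the image; attribution = score drop when occluded."""
    h, w = img.shape
    attr = np.zeros_like(img, dtype=float)
    base = score_fn(img)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = img.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            attr[y:y + patch, x:x + patch] = base - score_fn(occluded)
    return attr

img = np.ones((8, 8))
attr = occlusion_map(img, score)
# Centre patches lower the score when masked; corner patches do not.
print(attr[3, 3] > attr[0, 0])  # True
```

Regions whose occlusion barely changes the score receive near-zero attribution, which is why occlusion maps highlight the evidence the classifier actually uses.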

Quick Start

To begin unraveling the intricacies of your model's decisions, import and utilize the corresponding classes as follows:

Occlusion for classification models:

```python
from easy_explain import OcclusionExplain

model = 'your-model'

occlusion_explain = OcclusionExplain(model=model)
vis_types = [["blended_heat_map", "original_image"]]
vis_signs = [["positive", "all"]]

occlusion_explain.generate_explanation(
    image_url="your-image",
    total_preds=5,
    vis_types=vis_types,
    vis_signs=vis_signs,
    labels_path="your-labels-path",
)
```
LRP for YoloV8 models:

```python
from easy_explain import YOLOv8LRP

model = 'your-model'
image = 'your-image'

lrp = YOLOv8LRP(model, power=2, eps=1, device='cpu')

explanation_lrp = lrp.explain(image, cls='your-class', contrastive=False).cpu()

lrp.plot_explanation(frame=image, explanation=explanation_lrp, contrastive=True, cmap='seismic', title='Explanation for your class')
```
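Conceptually, LRP redistributes the model's output score backwards layer by layer so that total relevance is conserved. A minimal NumPy sketch of the epsilon rule for a single linear layer (a generic illustration, not easy-explain's YoloV8 implementation):

```python
import numpy as np

def lrp_linear(a, w, relevance, eps=1e-6):
    """Epsilon-rule LRP for one linear layer: redistribute output relevance
    to inputs in proportion to each input's contribution a_j * w_jk."""
    z = a @ w                               # pre-activations, shape (K,)
    s = relevance / (z + eps * np.sign(z))  # stabilised ratios
    return a * (w @ s)                      # input relevances, shape (J,)

a = np.array([1.0, 2.0])
w = np.array([[1.0, 0.0],
              [0.0, 1.0]])
r_out = np.array([0.3, 0.7])
r_in = lrp_linear(a, w, r_out)
print(round(r_in.sum(), 6))  # relevance is (approximately) conserved: 1.0
```

Applying this rule through every layer of a network yields a pixel-level relevance map whose values sum (up to the epsilon stabiliser) to the explained output score.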
CAM (SmoothGradCAMpp & LayerCAM) for image classification models:

```python
from easy_explain import CAMExplain

model = 'your-model'
img = 'your-image'

trans_params = {
    "ImageNet_transformation": {
        "Resize": {"h": 224, "w": 224},
        "Normalize": {"mean": [0.485, 0.456, 0.406], "std": [0.229, 0.224, 0.225]},
    }
}

explainer = CAMExplain(model)

input_tensor = explainer.transform_image(img, trans_params["ImageNet_transformation"])

explainer.generate_explanation(img, input_tensor, multiple_layers=["a_layer", "another_layer", "another_layer"])
```
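For intuition, CAM-family methods weight a layer's activation maps by per-channel importance and keep the positive evidence. A minimal NumPy sketch (hypothetical names, not the easy-explain API):

```python
import numpy as np

def cam(activations, weights):
    """Class activation map: channel-weighted sum of feature maps, then ReLU."""
    # activations: (C, H, W) feature maps; weights: (C,) channel importances.
    m = np.tensordot(weights, activations, axes=([0], [0]))  # (H, W)
    m = np.maximum(m, 0)                                     # keep positive evidence
    return m / m.max() if m.max() > 0 else m                 # normalise to [0, 1]

acts = np.stack([np.eye(4), np.ones((4, 4))])  # two 4x4 feature maps
heat = cam(acts, np.array([1.0, 0.5]))
print(heat.shape)  # (4, 4)
```

Variants such as SmoothGradCAMpp and LayerCAM differ mainly in how the channel weights are derived from gradients; the weighted-sum-plus-ReLU step above is common to the family.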

For more information on how to get started, have a look at the example notebooks.

Examples

Explore how easy-explain can be applied in various scenarios:

Use Case Example


How to contribute?

easy-explain thrives on community contributions, from feature requests and bug reports to code submissions. We encourage you to share your insights, improvements, and use cases to foster a collaborative environment for advancing XAI.

Getting Involved

Submit Issues: Encounter a bug or have a feature idea? Let us know through our issues page.

Code Contributions: Interested in contributing code? Please refer to our CONTRIBUTING guidelines for more information on how to get started.

Join us in making AI models more interpretable, transparent, and trustworthy with easy-explain.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

easy_explain-0.5.0.tar.gz (26.9 kB)

Uploaded Source

File details

Details for the file easy_explain-0.5.0.tar.gz.

File metadata

  • Download URL: easy_explain-0.5.0.tar.gz
  • Upload date:
  • Size: 26.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for easy_explain-0.5.0.tar.gz
  • SHA256: b7643fe3b893aa09fe8847a9d4e6e701abe437472e9efd3d420d46281cd2d665
  • MD5: 5930762d111a280adaf45029f660d339
  • BLAKE2b-256: 04bcb3669e176c79d69b6e24c770dd5735ed4c814425a7215641d2006a7b7f26

See more details on using hashes here.
