
OpenVINO™ Explainable AI Toolkit - OpenVINO XAI




[Figure: OpenVINO XAI concept]

OpenVINO™ Explainable AI (XAI) Toolkit provides a suite of XAI algorithms for visual explanation of OpenVINO™ as well as PyTorch and ONNX models.

Given AI models and input images, OpenVINO XAI generates saliency maps that highlight regions of interest in the inputs from the models' perspective, helping users understand why complex AI models produce the responses they do.

Using this package, you can add model analysis and explanation features on top of an existing OpenVINO inference pipeline with a few lines of code.

import openvino_xai as xai

explainer = xai.Explainer(model=ov_model, task=xai.Task.CLASSIFICATION)

# Existing inference pipeline
for i, image in enumerate(images):
    labels = infer(model=ov_model, image=image)

    # Model analysis
    explanation = explainer(data=image, targets=labels)
    explanation.save(dir_path="./xai", name=str(i))

Features

What's new in v1.1.0

  • Support PyTorch models with insert_xai() API for saliency map generation on PyTorch / ONNX runtime
  • Support OpenVINO IR (.xml) / ONNX (.onnx) model files for Explainer
  • Enable AISE method: Adaptive Input Sampling for Explanation of Black-box Models
  • Add Pointing Game, Insertion-Deletion AUC and ADCC quality metrics for saliency maps
  • Upgrade OpenVINO to 2024.4.0
  • Add saliency map visualization with explanation.plot() (see the sketch after this list)
  • Enable flexible naming for saved saliency maps and include confidence scores
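
For instance, the new plotting and saving options can be chained onto an existing explanation; a minimal sketch, reusing the explainer and image objects from the snippet above:

# Generate an explanation as usual
explanation = explainer(data=image, targets=[293])

# Render the saliency maps inline (e.g., in a Jupyter notebook)
explanation.plot()

# Save the maps under a custom name
explanation.save(dir_path="./xai", name="cheetah")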

Please refer to the change logs for the full release history.

Supported XAI methods

At the moment, the Image Classification and Object Detection tasks are supported for the Computer Vision domain. Both Black-Box (model-agnostic but slow) and White-Box (model-specific but fast) methods are supported:

Domain           Task                  Type       Algorithm               Links
Computer Vision  Image Classification  White-Box  ReciproCAM              arxiv / src
                                                  VITReciproCAM           arxiv / src
                                                  ActivationMap           experimental / src
                                       Black-Box  AISEClassification      src
                                                  RISE                    arxiv / src
                 Object Detection      White-Box  DetClassProbabilityMap  experimental / src
                                       Black-Box  AISEDetection           src

See the User Guide for a more detailed method comparison.

Supported explainable models

Most CNN and Transformer models from PyTorch Image Models (timm) are supported and validated.

Please refer to the known issues for unsupported models and the reasons they are unsupported.

NOTE: Support for GenAI / LLMs will also be added incrementally in upcoming releases.
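
As an illustration, a validated timm model can be converted in memory with OpenVINO's generic PyTorch frontend and handed straight to the Explainer; a minimal sketch, assuming a standard 224x224 input (ov.convert_model is an OpenVINO API, not part of OpenVINO XAI):

import openvino as ov
import timm
import torch

import openvino_xai as xai

# Convert a timm model to an OpenVINO model via the PyTorch frontend
torch_model = timm.create_model("resnet18.a1_in1k", pretrained=True)
ov_model = ov.convert_model(torch_model, example_input=torch.randn(1, 3, 224, 224))

# The converted model can then be explained like any other OpenVINO model
explainer = xai.Explainer(model=ov_model, task=xai.Task.CLASSIFICATION)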


Installation

NOTE: OpenVINO XAI works on Python 3.10 or higher

Set up environment
# Create virtual env.
python3.10 -m venv .ovxai

# Activate virtual env.
source .ovxai/bin/activate

Install from PyPI package

# Base package (for normal use):
pip install openvino_xai

# Dev package (for development):
pip install openvino_xai[dev]

Install from source

# Clone the source repository
git clone https://github.com/openvinotoolkit/openvino_xai.git
cd openvino_xai

# Editable mode (for development):
pip install -e .[dev]

(Optional) Enable PyTorch support

You can use the PyTorch XAI features if PyTorch is installed alongside OpenVINO XAI.

# Install PyTorch (CPU version as example)
pip3 install torch --index-url https://download.pytorch.org/whl/cpu

Please refer to the PyTorch Installation Guide for other options.

Verify installation
# Run tests
pytest -v -s ./tests/unit

# Run code quality checks
pre-commit run --all-files

Quick Start

Hello, OpenVINO XAI

Let's imagine that our OpenVINO model is up and running in an inference pipeline. While watching the outputs, we may want to analyze the model's behavior for debugging or understanding purposes.

By using the OpenVINO XAI Explainer, we can visualize why the model gives such responses. In this example, we want to understand why the model outputs a cheetah label for the given input image.

import cv2
import numpy as np
import openvino as ov
import openvino_xai as xai

# Load the model: IR or ONNX
ov_model: ov.Model = ov.Core().read_model("mobilenet_v3.xml")

# Load the image to be analyzed
image: np.ndarray = cv2.imread("tests/assets/cheetah_person.jpg")
image = cv2.resize(image, dsize=(224, 224))
image = np.expand_dims(image, 0)

# Create the Explainer for the model
explainer = xai.Explainer(
    model=ov_model,  # accepts path arguments "mobilenet_v3.xml" or "mobilenet_v3.onnx" as well
    task=xai.Task.CLASSIFICATION,
)

# Generate saliency map for the label of interest
explanation: xai.Explanation = explainer(
    data=image,
    targets=293,  # (cheetah), accepts label indices or actual label names if label_names provided
    overlay=True,  # saliency map overlay over the input image, defaults to False
)

# Save saliency maps to output directory
explanation.save(dir_path="./output")
[Figures: original input image and the explained image with the saliency map overlaid]

We can see that the model focuses on the body and skin areas of the animals to tell whether this image contains actual cheetahs.

Insert an XAI head into your models

Using the insert_xai API, we can insert an XAI head into existing OpenVINO or PyTorch models directly and get an additional "saliency_map" output in the same inference pipeline.

import numpy as np
import timm
import torch

import openvino_xai as xai

# Get a PyTorch model from TIMM
torch_model: torch.nn.Module = timm.create_model("resnet18.a1_in1k", in_chans=3, pretrained=True)

# Insert XAI head
model_xai: torch.nn.Module = xai.insert_xai(torch_model, xai.Task.CLASSIFICATION)

# Prepare the input: scale to [0, 1] and convert NHWC -> NCHW, reusing `image` from
# the snippet above (for real use, apply the model's own mean/std normalization)
image_norm = (image / 255.0).astype(np.float32).transpose(0, 3, 1, 2)

# Torch XAI model inference
model_xai.eval()
with torch.no_grad():
    outputs = model_xai(torch.from_numpy(image_norm))
    logits = outputs["prediction"]  # BxC
    saliency_maps = outputs["saliency_map"]  # BxCxHxW: per-class saliency map
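
The wrapped model can also be exported with the standard torch.onnx.export so that the extra output is available on ONNX Runtime; a minimal sketch (file name, input shape, and output names below are illustrative):

# Export the XAI-wrapped model to ONNX (PyTorch flattens the dict outputs,
# so we name them explicitly; shapes and file names are illustrative)
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model_xai,
    dummy_input,
    "resnet18_xai.onnx",
    input_names=["input"],
    output_names=["prediction", "saliency_map"],
)
# The exported model now exposes the "saliency_map" output on ONNX Runtime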

More advanced use-cases

Users can tweak the basic use case according to their purposes, which include but are not limited to:

  • Select the XAI mode (White-Box or Black-Box) or even a specific method, both of which are decided automatically by default (see the sketch after this list)
  • Provide custom model pre/post-processing functions, such as the resizing and normalization the model expects
  • Customize output image visualization options
  • Explain multiple class targets, passing them as label indices or as actual label names
  • Call the explainer multiple times to explain multiple images or to use different targets
  • Insert an XAI head into your PyTorch models and export them to ONNX format to generate saliency maps on ONNX Runtime (refer to the full example script)
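
As an illustration, explicit mode and method selection might look like the sketch below. NOTE: the explain_mode / explain_method keywords and the ExplainMode / Method enum names are assumptions for illustration, not confirmed API; consult the User Guide for the exact names.

# A hypothetical sketch of explicit mode/method selection; keyword and
# enum names are assumptions -- check the User Guide for the exact API.
explainer = xai.Explainer(
    model=ov_model,
    task=xai.Task.CLASSIFICATION,
    explain_mode=xai.ExplainMode.WHITEBOX,  # or BLACKBOX; decided automatically by default
    explain_method=xai.Method.RECIPROCAM,  # pin a specific algorithm
)

# Explain several targets at once, by index or by name when label names are given
explanation = explainer(
    data=image,
    targets=["cheetah", "person"],  # assumes label_names supplies these names
    label_names=label_names,  # hypothetical list of the model's class names
)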

Please find more options and scenarios in the User Guide and the example scripts below.

Playing with the examples

Please explore the runnable example scripts and play with them to get familiar with the Explainer and insert_xai APIs.

# Prepare models by running tests (need "pip install openvino_xai[dev]" extra option)
# Models are downloaded and stored in .data/otx_models
pytest tests/test_classification.py

# Run a bunch of classification examples
# All outputs will be stored in the corresponding output directory
python examples/run_classification.py .data/otx_models/mlc_mobilenetv3_large_voc.xml \
tests/assets/cheetah_person.jpg --output output

# Run PyTorch and ONNX support example
python examples/run_torch_onnx.py

Contributing

For those who would like to contribute to the library, please refer to the contribution guide for details.

Please let us know via the Issues tab if you have any issues, feature requests, or questions.

Thank you! We appreciate your support!


License

OpenVINO™ Toolkit is licensed under Apache License Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.


Disclaimer

Intel is committed to respecting human rights and avoiding complicity in human rights abuses. See Intel's Global Human Rights Principles. Intel's products and software are intended only to be used in applications that do not cause or contribute to a violation of an internationally recognized human right.

