
Model Interpretability for PyTorch.

Project description

FoXAI

FoXAI simplifies the application of eXplainable AI algorithms to explain the performance of neural network models during training. The library acts as an aggregator of existing libraries with implementations of various XAI algorithms and seeks to facilitate and popularize their use in machine learning projects.

Currently, only algorithms related to computer vision are supported, but we plan to add support for text, tabular and multimodal data problems in the future.


Installation

Installation requirements:

  • Python >=3.7.2,<3.11

Important: For any problems regarding installation, we advise referring first to our FAQ.
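
FoXAI is published on PyPI, so the released package can be installed directly with pip:

pip install foxai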

GPU acceleration

To use the torch library with GPU acceleration, you need to install a build of torch that matches your installed CUDA drivers and falls within the version range supported by this library, currently torch>=1.12.1,<2.0.0. A list of torch wheels with CUDA support can be found at https://download.pytorch.org/whl/torch/.
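
For illustration only (the exact wheel depends on your GPU driver and CUDA version; torch 1.13.1 with the cu117 index below is our assumption), installing a CUDA-enabled build from that index could look like this:

pip install torch==1.13.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117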

Manual installation

If you would like to install from source, you can build a wheel package using poetry. This assumes that poetry is already installed; see the Poetry section under Development below for installation instructions. To build the wheel package, run:

git clone https://github.com/softwaremill/FoXAI.git
cd FoXAI/
poetry install
poetry build

As a result, you will get a wheel file inside the dist/ directory that you can install via pip:

pip install dist/foxai-x.y.z-py3-none-any.whl

Getting started

To use the FoXAI library in your ML project, simply add a WandBCallback object to the callback list of the Trainer from the pytorch-lightning library. Currently, Weights and Biases is the only supported experiment tracking tool.

Below is a code snippet from the example (example/mnist_wandb.py):

import torch
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import WandbLogger

import wandb
from foxai.callbacks.wandb_callback import WandBCallback
from foxai.context_manager import CVClassificationExplainers, ExplainerWithParams

    ...
    wandb.login()
    wandb_logger = WandbLogger(project=project_name, log_model="all")
    callback = WandBCallback(
        wandb_logger=wandb_logger,
        explainers=[
            ExplainerWithParams(
                explainer_name=CVClassificationExplainers.CV_INTEGRATED_GRADIENTS_EXPLAINER
            ),
            ExplainerWithParams(
                explainer_name=CVClassificationExplainers.CV_GRADIENT_SHAP_EXPLAINER
            ),
        ],
        idx_to_label={index: index for index in range(0, 10)},
    )
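    # LitMNIST is the LightningModule defined in the example code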
    model = LitMNIST()
    trainer = Trainer(
        accelerator="gpu",
        devices=1 if torch.cuda.is_available() else None,
        max_epochs=max_epochs,
        logger=wandb_logger,
        callbacks=[callback],
    )
    trainer.fit(model)

CLI

A CLI tool is available to update the artifacts of an experiment tracked in Weights and Biases. It allows you to create XAI explanations and send them to W&B offline. The tool uses hydra to handle YAML configuration files. To check the available options, type:

foxai-wandb-updater --help

Typical usage with configuration in config/config.yaml:

foxai-wandb-updater --config-dir config/ --config-name config

Content of config.yaml:

username: <WANDB_USERNAME>
experiment: <WANDB_EXPERIMENT>
run_id: <WANDB_RUN_ID>
classifier: # model class to explain
  _target_: example.streamlit_app.mnist_model.LitMNIST
  batch_size: 1
  data_dir: "."
explainers: # list of explainers to use
 - explainer_with_params:
    explainer_name: CV_GRADIENT_SHAP_EXPLAINER
    kwargs:
      n_steps: 1000
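
Since the tool is built on hydra, individual values from config.yaml can also be overridden directly on the command line using hydra's key=value syntax; for example (the run_id value below is a placeholder):

foxai-wandb-updater --config-dir config/ --config-name config run_id=<WANDB_RUN_ID>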

Development

Requirements

The project was tested using Python version 3.8.

CUDA

The recommended CUDA version is 10.2, as it has been supported since torch 1.5.0. You can check the compatibility of your CUDA version with the current version of torch at https://pytorch.org/get-started/previous-versions/.

As our base Docker image we used the one provided by Nvidia: nvidia/cuda:10.2-devel-ubuntu18.04.

If you would like an easy-to-use Docker image, we advise using our Dockerfile.
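
As a rough sketch only (the image tag, the mount path, and the assumption that the Dockerfile sits in the repository root are ours; GPU passthrough additionally requires the NVIDIA Container Toolkit), building and entering such an image could look like this:

docker build -t foxai-dev .
docker run --rm -it --gpus all -v "$(pwd)":/workspace foxai-dev bash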

pyenv

This is an optional step, but probably one of the easiest ways to get a Python version with all the needed additional tools (e.g. pip).

pyenv is a tool used to manage multiple versions of Python. To install this package, follow the instructions on the project repository page: https://github.com/pyenv/pyenv#installation. After installation, you can install the desired Python version, e.g. 3.8.16:

pyenv install 3.8.16

The next step is required to be able to use the desired version of Python with poetry. To activate a specific version of the Python interpreter, execute the command:

pyenv local 3.8.16 # or `pyenv global 3.8.16`

Inside the repository, you can select a specific version of the Python interpreter for poetry with the command:

poetry env use 3.8.16

After changing the interpreter version, you have to install all dependencies again:

poetry install

Poetry

To separate runtime environments for different services and repositories, it is recommended to use a virtual Python environment. You can configure Poetry to create a new virtual environment in the project directory of every repository. To install Poetry, follow the instructions at https://python-poetry.org/docs/#installing-with-the-official-installer. We are using Poetry version 1.2.1. To install a specific version, you have to provide the desired package version:

curl -sSL https://install.python-poetry.org | POETRY_VERSION=1.2.1 python3 -

Add poetry to PATH:

export PATH="/home/ubuntu/.local/bin:$PATH"
echo 'export PATH="/home/ubuntu/.local/bin:$PATH"' >> ~/.bashrc

After installation, configure Poetry to create virtual environments in the project directory:

poetry config virtualenvs.create true
poetry config virtualenvs.in-project true

The final step is to install all the dependencies defined in the pyproject.toml file.

poetry install

Once all the steps have been completed, the environment is ready to go. By default, the virtual environment will be created with the name .venv inside the project directory.
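
Commands can then be run inside the environment with poetry run, for example to launch the MNIST example mentioned above (this assumes the Weights and Biases credentials the script expects are already configured):

poetry run python example/mnist_wandb.py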

Pre-commit hooks setup

To improve the development experience, please make sure to install our pre-commit hooks as the very first step after cloning the repository:

poetry run pre-commit install

Note


At the moment, only explainability algorithms for image classification are implemented. In the future, more algorithms and more computer vision tasks will be introduced. Eventually, the module should work with all types of tasks (NLP, etc.).

Examples

In the example/notebooks/ directory you can find notebooks with example usage of this framework. Scripts in the example/ directory contain samples of training models using different callbacks.

Download files

Download the file for your platform.

Source Distribution

foxai-0.6.0.tar.gz (38.5 kB)

Built Distribution

foxai-0.6.0-py3-none-any.whl (62.4 kB)

File details

Details for the file foxai-0.6.0.tar.gz.

File metadata

  • Download URL: foxai-0.6.0.tar.gz
  • Upload date:
  • Size: 38.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.2.1 CPython/3.9.16 Linux/5.19.0-41-generic

File hashes

Hashes for foxai-0.6.0.tar.gz:

  • SHA256: 42b19a951126de23eb07098c56dc4bceef291c13ae371fca26b4dcfb3785465b
  • MD5: 20fe08dabe621b1f7db1f569f71383d3
  • BLAKE2b-256: 084fbc9c2a1caa5526cb00eacd69aae871d2d427dc1c9303bf763b56dae76113


File details

Details for the file foxai-0.6.0-py3-none-any.whl.

File metadata

  • Download URL: foxai-0.6.0-py3-none-any.whl
  • Upload date:
  • Size: 62.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.2.1 CPython/3.9.16 Linux/5.19.0-41-generic

File hashes

Hashes for foxai-0.6.0-py3-none-any.whl:

  • SHA256: bdd92772812632cf32d3d1a5c224fb5eb3c725c98524a0f5e79e32f692343912
  • MD5: 69dc7b97e8eed81905ceaa4680e1c75e
  • BLAKE2b-256: 9d7f145918bdb2983f384298a67c1c7d5b6afd04d59dc1fbd33eadfbc3b40325

