
Project description

LeGrad

An Explainability Method for Vision Transformers via Feature Formation Sensitivity

Walid Bousselham, Angie Boggust, Sofian Chaybouti, Hendrik Strobelt, Hilde Kuehne

Vision-Language foundation models have shown remarkable performance in various zero-shot settings such as image retrieval, classification, or captioning. We propose LeGrad, an explainability method specifically designed for ViTs. With LeGrad we explore the decision-making process of such models by leveraging their feature formation process. A by-product of understanding how VL models reach their decisions is the ability to produce localized heatmaps for any text prompt.

This package provides a wrapper around the OpenCLIP library that equips VL models with LeGrad.

:hammer: Installation

The legrad library can be installed via pip:

$ pip install legrad_torch

Demo

To run the Gradio demo app locally, first install gradio and then run app.py:

$ pip install gradio
$ python app.py
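
The repository ships its own app.py. Purely as an illustration of what such a demo involves (the file name, layout, and heatmap post-processing below are assumptions, not the shipped code), a minimal Gradio wiring of the legrad wrapper could look like this:

# demo_sketch.py: an illustrative sketch, not the repository's actual app.py
import numpy as np
import torch
import gradio as gr
import open_clip
from PIL import Image

from legrad import LeWrapper, LePreprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model, _, preprocess = open_clip.create_model_and_transforms(
    model_name='ViT-B-16', pretrained='laion2b_s34b_b88k', device=device)
tokenizer = open_clip.get_tokenizer(model_name='ViT-B-16')
model.eval()
model = LeWrapper(model)  # equip the model with LeGrad
preprocess = LePreprocess(preprocess=preprocess, image_size=448)

def explain(image: Image.Image, prompt: str):
    img = preprocess(image).unsqueeze(0).to(device)
    text_emb = model.encode_text(tokenizer([prompt]).to(device), normalize=True)
    heatmap = model.compute_legrad_clip(image=img, text_embedding=text_emb)
    # Assumption: the returned map can be squeezed to a single 2D array;
    # min-max normalize it and convert to uint8 for display.
    heatmap = heatmap.squeeze().detach().cpu().numpy()
    heatmap = (heatmap - heatmap.min()) / (heatmap.max() - heatmap.min() + 1e-8)
    return (heatmap * 255).astype(np.uint8)

demo = gr.Interface(
    fn=explain,
    inputs=[gr.Image(type="pil"), gr.Textbox(label="text prompt")],
    outputs=gr.Image(label="LeGrad heatmap"))

if __name__ == "__main__":
    demo.launch()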

Usage

To see which pretrained models are available, use the following code snippet:

import legrad
legrad.list_pretrained()
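
If list_pretrained() returns (architecture, pretrained_tag) pairs in the style of open_clip.list_pretrained() (an assumption, not documented behavior), a checkpoint can be picked from the list like this:

import legrad

# Assumption: list_pretrained() returns (architecture, pretrained_tag) pairs,
# mirroring open_clip.list_pretrained(); check against your installed version.
pairs = legrad.list_pretrained()
vit_b16_tags = [tag for arch, tag in pairs if arch == 'ViT-B-16']
print(vit_b16_tags)  # pick a tag such as 'laion2b_s34b_b88k' from here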

Single Image

To process an image and a text prompt use the following code snippet:

Note: the wrapper does not affect the original model, so all functionalities of OpenCLIP models can still be used seamlessly.

import requests
from PIL import Image
import open_clip
import torch

from legrad import LeWrapper, LePreprocess
from utils import visualize

# ------- model's parameters -------
model_name = 'ViT-B-16'
pretrained = 'laion2b_s34b_b88k'
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# ------- init model -------
model, _, preprocess = open_clip.create_model_and_transforms(
    model_name=model_name, pretrained=pretrained, device=device)
tokenizer = open_clip.get_tokenizer(model_name=model_name)
model.eval()
# ------- Equip the model with LeGrad -------
model = LeWrapper(model)
# ___ (Optional): Wrapper for Higher-Res input image ___
preprocess = LePreprocess(preprocess=preprocess, image_size=448)

# ------- init inputs: image + text -------
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = preprocess(Image.open(requests.get(url, stream=True).raw)).unsqueeze(0).to(device)
text = tokenizer(['a photo of a cat']).to(device)

# ------- compute the explainability map -------
text_embedding = model.encode_text(text, normalize=True)
explainability_map = model.compute_legrad_clip(image=image, text_embedding=text_embedding)

# ___ (Optional): Visualize overlay of the image + heatmap ___
visualize(heatmaps=explainability_map, image=image)
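
As noted above, the wrapper leaves the original OpenCLIP model untouched, so the same wrapped model can still be used for ordinary zero-shot scoring, and a LeGrad heatmap can be computed for each candidate prompt in turn. The following is a minimal sketch continuing from the snippet above (the label list is illustrative; the per-prompt loop simply reuses the documented compute_legrad_clip call):

# Continuing from the snippet above: model, tokenizer, preprocess, image,
# and device are already defined; visualize comes from the repo's utils.
labels = ['a photo of a cat', 'a photo of a dog', 'a photo of a remote control']
text = tokenizer(labels).to(device)

# Standard OpenCLIP zero-shot scoring still works on the wrapped model
with torch.no_grad():
    image_features = model.encode_image(image, normalize=True)
    text_features = model.encode_text(text, normalize=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print({label: round(p.item(), 3) for label, p in zip(labels, probs[0])})

# One LeGrad heatmap per prompt (gradients are needed, so no torch.no_grad here)
for label in labels:
    text_embedding = model.encode_text(tokenizer([label]).to(device), normalize=True)
    explainability_map = model.compute_legrad_clip(image=image, text_embedding=text_embedding)
    visualize(heatmaps=explainability_map, image=image)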

:star: Acknowledgement

This code is built as a wrapper around the OpenCLIP library from LAION; visit their repo for more vision-language models. This project also takes inspiration from Transformer-MM-Explainability and the timm library; please visit their repositories.

:books: Citation

If you find this repository useful, please consider citing our work :pencil: and giving a star :star2: :

@article{bousselham2024legrad,
  author    = {Bousselham, Walid and Boggust, Angie and Chaybouti, Sofian and Strobelt, Hendrik and Kuehne, Hilde},
  title     = {LeGrad: An Explainability Method for Vision Transformers via Feature Formation Sensitivity},
  journal   = {arXiv},
  year      = {2024},
}

Download files

Download the file for your platform.

Source Distribution

legrad_torch-1.0.tar.gz (15.7 kB)

Built Distribution

legrad_torch-1.0-py3-none-any.whl (13.9 kB)

File details

Details for the file legrad_torch-1.0.tar.gz.

File metadata

  • Download URL: legrad_torch-1.0.tar.gz
  • Upload date:
  • Size: 15.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.9.12

File hashes

Hashes for legrad_torch-1.0.tar.gz

  • SHA256: 67191c00dc56631bd1e2ca8d264f05366bc4d5020d1a0146fa36e48a07312d45
  • MD5: 6dd656929afb3965439bd6dc6ce5a0d3
  • BLAKE2b-256: 09c67778bdab8384db4c787b8148cc378da384323bf24ec28eb8e64a6fb872ff


File details

Details for the file legrad_torch-1.0-py3-none-any.whl.

File metadata

  • Download URL: legrad_torch-1.0-py3-none-any.whl
  • Upload date:
  • Size: 13.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.9.12

File hashes

Hashes for legrad_torch-1.0-py3-none-any.whl

  • SHA256: 56cf378ab78c332a5f667235a591745adf40d4706c7e37990d3d01a73163847f
  • MD5: 61ee3478bafea7e73511db3fa5250366
  • BLAKE2b-256: 9fa5117d7d280926a76e48ddb36b40fb1e5b1b15dd1e0e2d25cac88180c57c07

