Faithfulness metrics for saliency maps
TODO: explain the role of each script
This repository implements the faithfulness metrics mentioned in the paper "Computing and evaluating saliency maps for image classification: a tutorial" in PyTorch. It can be used to compute the following metrics:
- Deletion Area Under Curve (DAUC)/Insertion Area Under Curve (IAUC) (Petsiuk et al. 2019)
- Deletion Correlation (DC)/Insertion Correlation (IC) (Gomez et al. 2022)
- Increase In Confidence (IIC)/Average Drop (AD) (Chattopadhyay et al. 2017)
- Average Drop in Deletion (ADD) (Jung et al.)
Single step metrics
This section covers the use of the IIC, AD and ADD metrics. First, generate the saliency map of the image:
```python
saliency_map = gradcam.attribute(img, class_ind)
```
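The `gradcam.attribute(img, class_ind)` call above follows Captum's API. For a self-contained illustration, the sketch below hand-rolls a minimal Grad-CAM with forward/backward hooks in plain PyTorch; the toy CNN and the choice of target layer are assumptions for the example, not the demo's actual CUB classifier.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, layer, img, class_ind):
    """Minimal Grad-CAM sketch: returns a (N, 1, h, w) saliency map for
    `class_ind`, computed at `layer` via forward/backward hooks."""
    acts, grads = [], []
    h1 = layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    score = model(img)[:, class_ind].sum()
    model.zero_grad()
    score.backward()
    h1.remove()
    h2.remove()
    # channel weights = spatially averaged gradients; then a weighted sum of
    # the activations, passed through ReLU
    weights = grads[0].mean(dim=(2, 3), keepdim=True)
    return F.relu((weights * acts[0]).sum(dim=1, keepdim=True))

# toy CNN standing in for the classifier (an assumption for this sketch)
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(4), torch.nn.Flatten(),
    torch.nn.Linear(8 * 16, 10), torch.nn.Softmax(dim=-1),
)
img = torch.randn(1, 3, 32, 32)
saliency_map = grad_cam(model, model[0], img, class_ind=3)
print(saliency_map.shape)  # torch.Size([1, 1, 32, 32])
```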
Then, compute the metric:
```python
iic = IIC()
iic_mean = iic(model, data, explanations, class_to_explain)
```
The `__call__()` method of every metric requires the following arguments:
- `model`: a `torch.nn.Module` that outputs a score tensor of shape (NxC), on which a softmax activation has been applied.
- `data`: the input image tensor of shape (Nx3xHxW).
- `explanations`: the saliency map tensor of shape (Nx1xH'xW').
- `class_to_explain`: the index of the class to explain for each input image. The shape should be (N).
The value returned by this method is simply the average value of the metric over all the images.
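As an illustration of what such a call computes, here is a hedged, minimal sketch of Average Drop (AD, Chattopadhyay et al. 2017) in plain PyTorch: the input is masked by its upsampled saliency map and the drop in class score is averaged over images. The library's own `IIC`/`AD` implementations may differ in details such as normalization; the toy model and random tensors below only exercise the shapes.

```python
import torch
import torch.nn.functional as F

def average_drop(model, data, explanations, class_to_explain):
    """AD = mean over images of max(0, Y_c - O_c) / Y_c, where Y_c is the
    class score on the original image and O_c the score on the masked one."""
    n = data.shape[0]
    # upsample the (Nx1xH'xW') map to the input resolution, scale to [0, 1]
    expl = F.interpolate(explanations, size=data.shape[2:],
                         mode="bilinear", align_corners=False)
    expl = expl / (expl.amax(dim=(2, 3), keepdim=True) + 1e-8)
    rows = torch.arange(n)
    with torch.no_grad():
        orig = model(data)[rows, class_to_explain]            # Y_c
        masked = model(data * expl)[rows, class_to_explain]   # O_c
    return (torch.clamp(orig - masked, min=0) / (orig + 1e-8)).mean().item()

# toy softmax classifier and random inputs (assumptions for this sketch)
model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(3 * 8 * 8, 5),
                            torch.nn.Softmax(dim=-1))
data = torch.randn(4, 3, 8, 8)
explanations = torch.rand(4, 1, 4, 4)
class_to_explain = torch.tensor([0, 1, 2, 3])
ad = average_drop(model, data, explanations, class_to_explain)
print(0.0 <= ad <= 1.0)  # True: softmax scores keep AD in [0, 1]
```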
Multi-step metrics
This section covers the use of the DAUC, IAUC, DC and IC metrics. These metrics work similarly, but some arguments have to be passed to the constructor:
```python
dauc = DAUC(data_shape, explanation_shape, bound_max_step=True)
```
where `data_shape` and `explanation_shape` are the shapes of the image tensor and the saliency map tensor.
These metrics run one inference per masking/revealing step, so a high-resolution saliency map of size 56x56 would require approximately 3,000 inferences (56x56 = 3136 steps).
To bound this cost, set the `bound_max_step` argument to True.
More precisely, if the resolution of the saliency map is greater than 14x14, this argument forces several pixels to be masked/revealed before each new inference.
The metric is then computed the same way as the single step metrics:
```python
dauc_mean = dauc(model, data, explanations, class_to_explain)
```
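For intuition, the deletion protocol behind DAUC (Petsiuk et al. 2019) can be sketched as follows. This is a hedged, simplified version: it assumes the saliency map already matches the input resolution and omits the step bounding described above; the library's implementation may differ in details.

```python
import torch

def deletion_curve(model, data, explanations, class_to_explain,
                   pixels_per_step=1):
    """Mask pixels from most to least salient, recording the class score
    after each step; the mean of the curve approximates the deletion AUC."""
    n, c, h, w = data.shape
    rows = torch.arange(n)
    # rank all pixels of each saliency map, most salient first
    order = explanations.reshape(n, -1).argsort(dim=1, descending=True)
    x = data.clone()
    flat = x.reshape(n, c, -1)  # view on x: scattering here updates x
    scores = []
    with torch.no_grad():
        scores.append(model(x)[rows, class_to_explain])
        for start in range(0, h * w, pixels_per_step):
            idx = order[:, start:start + pixels_per_step]
            flat.scatter_(2, idx.unsqueeze(1).expand(-1, c, -1), 0.0)
            scores.append(model(x)[rows, class_to_explain])
    curve = torch.stack(scores, dim=1)  # (N, n_steps + 1)
    return curve.mean(dim=1)            # per-image area under the curve

# toy softmax classifier and random inputs (assumptions for this sketch)
model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(3 * 4 * 4, 5),
                            torch.nn.Softmax(dim=-1))
data = torch.randn(2, 3, 4, 4)
explanations = torch.rand(2, 1, 4, 4)
class_to_explain = torch.tensor([0, 1])
dauc_vals = deletion_curve(model, data, explanations, class_to_explain,
                           pixels_per_step=2)
print(dauc_vals.shape)  # torch.Size([2])
```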
Demonstration
See the `demo.ipynb` notebook for a demonstration.
If you want to re-run the demo, download the model weights pretrained on the CUB dataset and put them at the project's root.
Also download the CUB test dataset and put it in a "data" folder located at the project's root.
The dataset should be formatted as expected by torchvision's `torchvision.datasets.ImageFolder` class.
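For reference, `ImageFolder` expects one subdirectory per class; the folder names below are only illustrative of the expected layout:

```
data/
├── 001.Black_footed_Albatross/
│   ├── image_0001.jpg
│   └── image_0002.jpg
└── 002.Laysan_Albatross/
    └── image_0001.jpg
```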