
Benchmarking attribution methods.


BAM - Benchmarking Attribution Methods

This repository contains the dataset, models, and metrics for benchmarking attribution methods (BAM) described in the paper Benchmarking Attribution Methods with Relative Feature Importance. If you use this library, please cite:

@article{BAM2019,
  title   = {{Benchmarking Attribution Methods with Relative Feature Importance}},
  author  = {Yang, Mengjiao and Kim, Been},
  journal = {CoRR},
  volume  = {abs/1907.09701},
  year    = {2019}
}

Setup

Run the following from the home directory of this repository to install the Python dependencies, download the BAM models, download MSCOCO and MiniPlaces, and construct the BAM dataset.

pip install bam-intp
source scripts/download_models.sh
source scripts/download_datasets.sh
python scripts/construct_bam_dataset.py

Dataset

Images in data/obj and data/scene are the same but carry object and scene labels respectively, as shown in the figure above. val_loc.txt records the top-left and bottom-right corners of each object, and val_mask contains the binary object masks for the validation set. Additional sets and their usage are described in the table below.

Name                 Training  Validation  Usage                              Description
obj                  90,000    10,000      Model contrast                     Objects and scenes with object labels
scene                90,000    10,000      Model contrast & Input dependence  Objects and scenes with scene labels
scene_only           90,000    10,000      Input dependence                   Scene-only images with scene labels
dog_bedroom          -         200         Relative model contrast            Dog in bedroom labeled as bedroom
bamboo_forest        -         100         Input independence                 Scene-only images of bamboo forest
bamboo_forest_patch  -         100         Input independence                 Bamboo forest with functionally insignificant dog patch
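For example, the corner coordinates in val_loc.txt can be turned into a binary object mask along these lines. This is a minimal sketch: the field order (x1, y1, x2, y2) and the image size are assumptions, so adjust them to the actual file format.

```python
import numpy as np

def corners_to_mask(x1, y1, x2, y2, height, width):
    """Build a binary mask that is 1 inside the object's bounding box."""
    mask = np.zeros((height, width), dtype=np.uint8)
    mask[y1:y2, x1:x2] = 1
    return mask

mask = corners_to_mask(10, 20, 60, 80, height=128, width=128)
print(mask.sum())  # → 3000 (the 60x50 box)
```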

Models

Models in models/obj, models/scene, and models/scene_only are trained on data/obj, data/scene, and data/scene_only respectively. Models in models/scenei for i in {1..10} are trained on images where a dog is added to i scene classes, while the remaining scene classes contain no added objects. All models are in TensorFlow's SavedModel format.

Metrics

BAM metrics compare how interpretability methods perform across models (model contrast), across inputs to the same model (input dependence), and across functionally equivalent inputs (input independence).

Model contrast scores

Given images that contain both objects and scenes, model contrast measures the difference in attributions between the model trained on object labels and the model trained on scene labels.
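The idea can be sketched as follows, assuming each method yields a per-pixel saliency map and that "attribution to the object" is the fraction of total absolute attribution falling inside the object mask. The exact aggregation used in the paper may differ; the toy maps below are made up for illustration.

```python
import numpy as np

def object_attribution(saliency, mask):
    """Fraction of total absolute attribution that falls inside the object region."""
    saliency = np.abs(saliency)
    return saliency[mask.astype(bool)].sum() / saliency.sum()

def model_contrast_score(sal_obj_model, sal_scene_model, mask):
    """Difference in object attribution between the object-label and scene-label models."""
    return object_attribution(sal_obj_model, mask) - object_attribution(sal_scene_model, mask)

# Toy saliency maps: the object model concentrates attribution on the object,
# while the scene model spreads it uniformly.
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1
sal_obj = mask + 0.01
sal_scene = np.ones((8, 8))
print(round(model_contrast_score(sal_obj, sal_scene, mask), 3))  # → 0.721
```

A higher score means the attribution method correctly tracks how much each model actually relies on the object.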

Input dependence rate

Given a model trained on scene labels, input dependence measures the percentage of inputs where the addition of objects results in the region being attributed as less important.
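One way to sketch this rate, assuming "importance of the region" is the mean absolute attribution inside the object mask (the paper's exact criterion is not reproduced here, and the toy pairs below are synthetic):

```python
import numpy as np

def region_importance(saliency, mask):
    """Mean absolute attribution inside a region."""
    return np.abs(saliency)[mask.astype(bool)].mean()

def input_dependence_rate(pairs, mask):
    """Fraction of (without_object, with_object) saliency pairs where the
    region is attributed as less important once the object is added."""
    hits = sum(
        region_importance(with_obj, mask) < region_importance(without_obj, mask)
        for without_obj, with_obj in pairs
    )
    return hits / len(pairs)

rng = np.random.default_rng(0)
mask = np.zeros((4, 4))
mask[:2, :2] = 1
# Toy pairs where attribution in the object region always halves after the edit.
pairs = [(s, s * 0.5) for s in (rng.random((4, 4)) + 0.1 for _ in range(10))]
print(input_dependence_rate(pairs, mask))  # → 1.0
```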

Input independence rate

Given a model trained on scene-only images, input independence measures the percentage of inputs where a functionally insignificant patch (e.g., a dog) does not affect explanations significantly.
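The rate can be sketched similarly, assuming "does not affect explanations significantly" means the saliency map changes by at most some relative threshold; the actual distance measure and threshold in the paper may differ, and the toy pairs are synthetic.

```python
import numpy as np

def input_independence_rate(pairs, threshold=0.1):
    """Fraction of (clean, patched) saliency pairs whose explanation changes by
    at most `threshold` (relative L1 change) when the patch is added."""
    hits = 0
    for clean, patched in pairs:
        rel_change = np.abs(patched - clean).sum() / (np.abs(clean).sum() + 1e-12)
        hits += rel_change <= threshold
    return hits / len(pairs)

clean = np.ones((4, 4))
pairs = [(clean, clean + 0.005),  # explanation barely moves
         (clean, clean * 2.0)]    # explanation doubles
print(input_independence_rate(pairs))  # → 0.5
```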

Evaluate saliency methods

To compute the model contrast score (MCS) over 10 randomly selected images, run

python bam/metrics.py --metrics=MCS --num_imgs=10

To compute the input dependence rate (IDR), change --metrics to IDR. To compute the input independence rate (IIR), first construct a set of functionally insignificant patches by running

python scripts/construct_delta_patch.py

and then evaluate IIR by running

python bam/metrics.py --metrics=IIR --num_imgs=10

Evaluate TCAV

TCAV is a global concept-attribution method; its MCS can be measured by comparing the TCAV scores of a particular object concept between the object model and the scene model. Run the following to compute the TCAV scores of the dog concept for the object model.

python bam/run_tcav.py --model=obj
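After running the command for each model, one simple comparison is the gap in mean concept importance between the two models. The scores below are made up for illustration, not taken from the paper or produced by run_tcav.py.

```python
import numpy as np

# Hypothetical TCAV scores for the "dog" concept (one score per CAV training run).
tcav_obj_model = np.array([0.92, 0.88, 0.95])
tcav_scene_model = np.array([0.31, 0.25, 0.28])

# MCS for a global method: the gap in mean concept importance between the models.
mcs = tcav_obj_model.mean() - tcav_scene_model.mean()
print(round(mcs, 2))  # → 0.64
```

A large gap indicates TCAV correctly reports that the dog concept matters to the object model but not to the scene model.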

Disclaimer

This is not an officially supported Google product.
