Benchmark interpretability methods.


BIM - Benchmark Interpretability Methods

This repository contains the dataset, models, and metrics for benchmarking interpretability methods (BIM) described in the paper:

  • Title: "BIM: Towards Quantitative Evaluation of Interpretability Methods with Ground Truth"
  • Authors: Sherry (Mengjiao) Yang, Been Kim

If you use this library, please cite:

@Article{BIM2019,
  title = {{BIM: Towards Quantitative Evaluation of Interpretability Methods with Ground Truth}},
  author = {Yang, Mengjiao and Kim, Been},
  year = {2019}
}

Setup

Run the following from the root directory of this repository to install the Python dependencies, download the BIM models, download MSCOCO and MiniPlaces, and construct the BIM dataset.

pip install bim
source scripts/download_models.sh
source scripts/download_datasets.sh
python scripts/construct_bim_dataset.py

Dataset

Images in data/obj and data/scene are the same but carry object and scene labels respectively, as shown in the figure above. val_loc.txt records the top-left and bottom-right corners of each object, and val_mask contains the binary masks of the objects in the validation set. Additional sets and their usage are described in the table below.
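As a minimal sketch of how the bounding-box annotations relate to the binary masks, the snippet below converts a pair of corners into a mask. The (x1, y1, x2, y2) line layout of val_loc.txt assumed here is hypothetical, not the repository's documented format.

```python
import numpy as np

# Hypothetical sketch: turn a bounding box (top-left and bottom-right
# corners, as recorded in val_loc.txt) into a binary object mask like
# those stored in val_mask. The coordinate convention is an assumption.
def box_to_mask(x1, y1, x2, y2, height, width):
    mask = np.zeros((height, width), dtype=np.uint8)
    mask[y1:y2, x1:x2] = 1  # mark the object region
    return mask

mask = box_to_mask(10, 20, 60, 80, height=128, width=128)
print(mask.sum())  # area of the 50x60 box -> 3000
```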

Name                 Training  Validation  Usage                              Description
obj                  90,000    10,000      Model contrast                     Objects and scenes with object labels
scene                90,000    10,000      Model contrast & Input dependence  Objects and scenes with scene labels
scene_only           90,000    10,000      Input dependence                   Scene-only images with scene labels
dog_bedroom          -         200         Relative model contrast            Dog in bedroom labeled as bedroom
bamboo_forest        -         100         Input independence                 Scene-only images of bamboo forest
bamboo_forest_patch  -         100         Input independence                 Bamboo forest with functionally insignificant dog patch

Models

Models in models/obj, models/scene, and models/scene_only are trained on data/obj, data/scene, and data/scene_only respectively. Models in models/scene{i} for i in {1, ..., 10} are trained on images where a dog is added to i scene classes, while the remaining scene classes contain no added objects. All models are in TensorFlow's SavedModel format.

Metrics

BIM metrics compare how interpretability methods perform across models (model contrast), across inputs to the same model (input dependence), and across functionally equivalent inputs (input independence).

Model contrast scores

Given images that contain both objects and scenes, model contrast measures the difference in attributions between the model trained on object labels and the model trained on scene labels.
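One plausible way to formalize this contrast (an illustrative simplification, not the paper's exact metric) is to compare the fraction of each model's attribution that falls inside the ground-truth object mask:

```python
import numpy as np

# Hedged sketch: the object model should attribute more importance to the
# object region than the scene model does; the gap is the contrast signal.
def attribution_in_mask(saliency, mask):
    # fraction of total absolute attribution falling inside the object
    total = np.abs(saliency).sum()
    return np.abs(saliency[mask.astype(bool)]).sum() / total

mask = np.zeros((8, 8)); mask[2:6, 2:6] = 1
sal_obj = mask * 1.0          # object model: all attribution on the object
sal_scene = np.ones((8, 8))   # scene model: attribution spread uniformly
contrast = attribution_in_mask(sal_obj, mask) - attribution_in_mask(sal_scene, mask)
print(round(contrast, 2))  # 1.0 - 16/64 = 0.75
```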

Input dependence rate

Given a model trained on scene labels, input dependence measures the percentage of inputs where the addition of objects results in the region being attributed as less important.
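The per-input check underlying this rate can be sketched as follows (an illustrative simplification, not the library's implementation): for a scene model, the pasted object's region should matter less than the same region in the scene-only image.

```python
import numpy as np

# Hedged sketch of the input-dependence test for a single input: an input
# counts toward the rate when the object region is attributed as LESS
# important after the object is added.
def region_importance(saliency, mask):
    return np.abs(saliency)[mask.astype(bool)].mean()

mask = np.zeros((4, 4)); mask[1:3, 1:3] = 1
sal_with_obj = np.full((4, 4), 0.1)    # scene model ignores the pasted dog
sal_scene_only = np.full((4, 4), 0.5)  # same region matters without the dog
depends = region_importance(sal_with_obj, mask) < region_importance(sal_scene_only, mask)
print(bool(depends))  # True -> this input counts toward the dependence rate
```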

Input independence rate

Given a model trained on scene-only images, input independence measures the percentage of inputs where a functionally insignificant patch (e.g., a dog) does not affect explanations significantly.
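The per-input check here can be sketched as a thresholded distance between the two explanations (the threshold and distance below are illustrative assumptions, not the paper's definitions):

```python
import numpy as np

# Hedged sketch: the explanation for an image with a functionally
# insignificant patch should stay close to the explanation for the
# clean image.
def explanations_close(sal_clean, sal_patched, tol=0.1):
    # mean absolute change in attribution, normalized by the clean map
    diff = np.abs(sal_patched - sal_clean).mean()
    return diff / (np.abs(sal_clean).mean() + 1e-12) <= tol

sal_clean = np.ones((4, 4))
sal_patched = sal_clean + 0.05  # patch barely perturbs the explanation
print(explanations_close(sal_clean, sal_patched))  # True
```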

Evaluate saliency methods

To compute the model contrast score (MCS) over 10 randomly selected images, run

python bim/metrics.py --metrics=MCS --num_imgs=10

To compute the input dependence rate (IDR), change --metrics to IDR. To compute the input independence rate (IIR), first construct a set of functionally insignificant patches by running

python scripts/construct_delta_patch.py

and then evaluate IIR by running

python bim/metrics.py --metrics=IIR --num_imgs=10

Evaluate TCAV

TCAV is a global concept-attribution method, so its MCS can be measured by comparing the TCAV scores of a particular object concept between the object model and the scene model. Run the following to compute the TCAV scores of the dog concept for the object model.

python bim/run_tcav.py --model=obj
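Once run_tcav.py has been run for both --model=obj and --model=scene, the contrast for TCAV reduces to a score gap. The numbers below are made up purely for illustration:

```python
# Hedged sketch: MCS for TCAV is the gap between the dog-concept TCAV
# scores of the object model and the scene model. Both values here are
# invented placeholders, not outputs of run_tcav.py.
tcav_obj = 0.95    # dog concept matters strongly to the object model
tcav_scene = 0.30  # dog concept matters less to the scene model
mcs = tcav_obj - tcav_scene
print(round(mcs, 2))  # 0.65
```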

Disclaimer

This is not an officially supported Google product.
