MetaQuantus is an XAI performance tool for identifying reliable metrics.
This repository contains the code and experimental results for the paper The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus.
MetaQuantus is currently under active development. Carefully note the release version to ensure reproducibility of your work.
Motivation
In Explainable AI (XAI), the problem of meta-evaluation (i.e., evaluating the evaluation method itself) arises as we select and quantitatively compare explanation methods for a given model, dataset and task, where the use of multiple metrics or evaluation techniques often leads to conflicting results. For example, scores from different metrics vary both in range and direction, with lower or higher scores indicating higher-quality explanations, making it difficult for practitioners to interpret the scores and select the best explanation method.
As illustrated in the figure below, the two metrics, Faithfulness Correlation (FC) (Bhatt et al., 2020) and Pixel-Flipping (PF) (Bach et al., 2015), rank the same explanation methods differently. For example, the Gradient method (Mørch et al., 1995; Baehrens et al., 2010) is ranked both highest (R=1) and lowest (R=3) depending on the metric used. From a practitioner's perspective, this causes confusion.
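The direction problem can be made concrete with a small toy example (the scores below are illustrative, and the two non-Gradient method names are stand-ins, not results from the paper): one metric treats higher scores as better, the other lower, so the same method can land at opposite ends of the ranking.

```python
# Illustrative scores for three explanation methods under two metrics.
# For Faithfulness Correlation (FC), higher is better; for Pixel-Flipping
# (PF), a lower area under the flipping curve is better.
fc_scores = {"Gradient": 0.80, "IntegratedGradients": 0.60, "GradientShap": 0.40}
pf_scores = {"Gradient": 0.70, "IntegratedGradients": 0.50, "GradientShap": 0.30}

def rank(scores, higher_is_better):
    # Rank methods from best (R=1) to worst.
    ordered = sorted(scores, key=scores.get, reverse=higher_is_better)
    return {method: r + 1 for r, method in enumerate(ordered)}

fc_rank = rank(fc_scores, higher_is_better=True)
pf_rank = rank(pf_scores, higher_is_better=False)

# The same method is ranked both best and worst.
print(fc_rank["Gradient"], pf_rank["Gradient"])  # 1 3
```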
With MetaQuantus, we address the problem of meta-evaluation by providing a simple yet comprehensive framework that evaluates metrics against two failure modes: resilience to noise (NR) and reactivity to adversaries (AR). Just as software systems undergo vulnerability and penetration tests before deployment, this open-source tool is designed to stress-test evaluation methods (e.g., as provided by Quantus).
Library
MetaQuantus is an open-source, general-purpose development tool for XAI researchers and Machine Learning (ML) practitioners to verify and benchmark newly constructed metrics (i.e., "quality estimators"). It offers an easy-to-use API that simplifies metric selection so that explanation method selection in XAI can be performed more reliably, with minimal code. MetaQuantus includes:
- A series of pre-built tests such as `ModelPerturbationTest` and `InputPerturbationTest` that can be applied to various metrics
- Supporting source code, e.g., for plotting and analysis
- Various tutorials, e.g., Getting-Started-with-MetaQuantus and Reproduce-Experiments
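The idea behind a test like `ModelPerturbationTest` can be sketched in plain Python, without the library: under minor perturbations, a reliable quality estimator should rank explanation methods consistently. The metric, methods and noise model below are simplified stand-ins, not MetaQuantus's actual API:

```python
import random

def toy_metric(explanation, seed):
    # Stand-in quality estimator: a deterministic score plus small
    # perturbation-dependent noise controlled by `seed`.
    rng = random.Random(seed)
    return sum(explanation) + rng.gauss(0, 0.01)

# Hypothetical attribution values for two explanation methods.
explanations = {"GradCAM": [0.9, 0.8], "Saliency": [0.5, 0.4]}

def ranking(seed):
    # Rank explanation methods by metric score, best first.
    scores = {m: toy_metric(e, seed) for m, e in explanations.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Noise resilience (NR): minor perturbations should not flip the ranking.
rankings = [ranking(seed) for seed in range(5)]
assert all(r == rankings[0] for r in rankings)
print(rankings[0])  # ['GradCAM', 'Saliency']
```

MetaQuantus automates this kind of check at scale, across perturbation types, estimators and datasets.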
Installation
The simplest way to install MetaQuantus is to clone the repository and enter its folder:

```bash
git clone https://github.com/anonymous/MetaQuantus.git
cd MetaQuantus
```

Then install it locally:

```bash
pip install -e .
```

Alternatively, install the dependencies from requirements.txt:

```bash
pip install -r requirements.txt
```
Note that the installation requires that PyTorch is already installed on your machine.
Package requirements
The package requirements are as follows:

```
python>=3.7.0
pytorch>=1.10.1
quantus>=0.3.2
captum>=0.4.1
```
Getting started
Please see Tutorial-Getting-Started-with-MetaQuantus.ipynb in the tutorials/ folder to run code similar to the example given above. Note that the PyTorch framework and the XAI evaluation library Quantus are needed to run MetaQuantus.
MetaQuantus methodology
Meta-evaluation of quality estimators is performed in 3 steps: (1) Perturbing, (2) Scoring and (3) Integrating.
- Perturbing. A minor or disruptive perturbation is induced, depending on the failure mode: NR or AR.
- Scoring. To assess each performance dimension, the estimator's intra-consistency (IAC) and inter-consistency (IEC) scores are calculated.
- Integrating. The IAC and IEC scores are combined into a meta-consistency (MC) score that summarises the estimator's performance.
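The three steps can be illustrated with a minimal, self-contained sketch. The score definitions below are simplified stand-ins for the paper's IAC and IEC formulations, not the library's implementation:

```python
import statistics

# Step 1 (Perturbing), simulated: toy scores of one quality estimator over
# 3 perturbed model instances (rows) and 4 explanation methods (columns),
# for the minor-noise (NR) case.
scores = [
    [0.90, 0.70, 0.50, 0.30],
    [0.88, 0.72, 0.48, 0.31],
    [0.91, 0.69, 0.52, 0.29],
]

def iac(scores):
    # Stand-in intra-consistency: a small score spread across perturbed
    # instances means the estimator reacts consistently to the noise.
    spreads = [statistics.pstdev(col) for col in zip(*scores)]
    return 1 - statistics.mean(spreads)

def iec(scores):
    # Stand-in inter-consistency: the fraction of instance pairs that
    # agree on the ranking of the explanation methods.
    def rank(row):
        return sorted(range(len(row)), key=row.__getitem__, reverse=True)
    ranks = [rank(row) for row in scores]
    pairs = [(i, j) for i in range(len(ranks)) for j in range(i + 1, len(ranks))]
    return sum(ranks[i] == ranks[j] for i, j in pairs) / len(pairs)

# Step 3 (Integrating): average the two consistency scores into a
# single MC-style summary score.
mc = (iac(scores) + iec(scores)) / 2
print(round(mc, 3))  # 0.994
```

A score near 1 indicates that, under this toy noise, the estimator behaves consistently both within and across perturbed instances.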
Reproduce the experiments
To reproduce the results of the paper, follow these steps:

1. Dataset generation: run the notebook Tutorial-Data-Generation-Experiments.ipynb to generate the data needed for the experiments. The notebook guides you through downloading and preprocessing the data and saving it to the appropriate test sets.

2. Results analysis: once dataset generation is complete, run Tutorial-Reproduce-Experiments.ipynb to produce and analyse the results. Inside the notebook, for each experiment, we describe which Python scripts to run to obtain the results. All these Python files are located in the scripts/ folder. Please note that the results may vary slightly depending on the random seed and other hyperparameters, but the overall trends and conclusions should remain the same.

For both steps, adjust local paths so that the appropriate files can be retrieved, and make sure all the necessary packages are installed. Enable GPUs throughout, as this speeds up the experiments considerably.
More details on how to run the scripts for step 2.
In the second step, run the Python scripts for the respective experiments as listed below (also referenced in the notebook). Feel free to change the hyperparameters if you want to run similar experiments on other explanation methods, datasets or models.
Test: Run a simple test that meta-evaluation works.

```bash
python3 run_test.py --dataset=ImageNet --K=3 --iters=2
```
Application: Run the benchmarking experiments (also used for category convergence analysis).
```bash
python3 run_benchmarking.py --dataset=MNIST --fname=f --K=5 --iters=3
python3 run_benchmarking.py --dataset=fMNIST --fname=f --K=5 --iters=3
python3 run_benchmarking.py --dataset=cMNIST --fname=f --K=5 --iters=3
```
Application: Run hyperparameter optimisation experiment.
```bash
python3 run_hp.py --dataset=MNIST --K=3 --iters=2
python3 run_hp.py --dataset=ImageNet --K=3 --iters=2
```
Experiment: Run the faithfulness ranking disagreement exercise.
```bash
python3 run_ranking.py --dataset=cMNIST --fname=f --K=5 --iters=3 --category=Faithfulness
```
Sanity-Check: Run sanity-checking exercise: L dependency.
```bash
python3 run_l_dependency.py --dataset=MNIST --K=5 --iters=3
python3 run_l_dependency.py --dataset=fMNIST --K=5 --iters=3
python3 run_l_dependency.py --dataset=cMNIST --K=5 --iters=3
```
Sanity-Check: Run sanity-checking exercise: adversarial estimators.
```bash
python3 run_sanity_checks.py --dataset=MNIST --K=3 --iters=2
python3 run_sanity_checks.py --dataset=ImageNet --K=3 --iters=2
```