
quanda

Toolkit for quantitative evaluation of data attribution methods in PyTorch.


quanda is currently under active development. Note the release version to ensure reproducibility of your work, and expect changes to the API.

🐼 Library overview

Training data attribution (TDA) methods attribute a model's output on a specific test sample to the training data it was trained on. They reveal the training datapoints responsible for the model's decisions. Existing methods achieve this by estimating the counterfactual effect of removing datapoints from the training set (Koh and Liang, 2017; Park et al., 2023; Bae et al., 2024), tracking the contributions of training points to the loss reduction throughout training (Pruthi et al., 2020), using interpretable surrogate models (Yeh et al., 2018), or finding training samples that are deemed similar to the test sample by the model (Caruana et al., 1999; Hanawa et al., 2021). In addition to model understanding, TDA has been used in a variety of applications such as debugging model behavior (Koh and Liang, 2017; Yeh et al., 2018; K and Søgaard, 2021; Guo et al., 2021), data summarization (Khanna et al., 2019; Marion et al., 2023; Yang et al., 2023), dataset selection (Engstrom et al., 2024; Chhabra et al., 2024), fact tracing (Akyurek et al., 2022) and machine unlearning (Warnecke et al., 2023).

Although there are various demonstrations of TDA’s potential for interpretability and practical applications, the critical question of how TDA methods should be effectively evaluated remains open. Several approaches have been proposed by the community, which can be categorized into three groups:

Ground Truth: As some of the methods are designed to approximate leave-one-out (LOO) effects, ground truth can often be computed for TDA evaluation. However, this counterfactual ground-truth approach requires retraining the model multiple times on different subsets of the training data, which quickly becomes computationally expensive. Additionally, this ground truth has been shown to be dominated by noise in practical deep learning settings, due to the inherent stochasticity of a typical training process (Basu et al., 2021; Nguyen et al., 2023).
Downstream Task Evaluators: To remedy the challenges associated with ground-truth evaluation, the literature proposes assessing the utility of a TDA method within the context of an end-task, such as model debugging or data selection (Koh and Liang, 2017; Khanna et al., 2019; Karthikeyan et al., 2021).
Heuristics: Finally, the community has also used heuristics (desirable properties or sanity checks) to evaluate the quality of TDA techniques. These include comparing the attributions of a trained model and a randomized model (Hanawa et al., 2021) and measuring the amount of overlap between the attributions for different test samples (Barshan et al., 2020).

quanda is designed to meet the need for a comprehensive and systematic evaluation framework, allowing practitioners and researchers to obtain a detailed view of the performance of TDA methods in various contexts.

Library Features

  • Unified TDA Interface: quanda provides a unified interface for various TDA methods, allowing users to easily switch between different methods.
  • Metrics: quanda provides a set of metrics to evaluate the effectiveness of TDA methods. These metrics are based on the latest research in the field.
  • Benchmarking: quanda provides a benchmarking tool to evaluate the performance of TDA methods on a given model, dataset, and problem. As many TDA evaluation methods require access to ground truth, our benchmarking tools allow users to generate a controlled setting with ground truth, and then compare the performance of different TDA methods in this setting.

Supported TDA Methods

| Method Name | Repository | Reference |
|---|---|---|
| Similarity Influence | Captum | Caruana et al., 1999 |
| Arnoldi Influence Function | Captum | Schioppa et al., 2022; Koh and Liang, 2017 |
| TracIn | Captum | Pruthi et al., 2020 |
| TRAK | TRAK | Park et al., 2023 |
| Representer Point Selection | Representer Point Selection | Yeh et al., 2018 |

Metrics

  • Linear Datamodeling Score (Park et al., 2023): Measures the correlation between the (grouped) attribution scores and the actual outputs of models trained on different subsets of the training set. For each subset, the linear datamodeling score compares the actual model output to the sum of attribution scores from that subset, using Spearman rank correlation (a minimal sketch follows this list).

  • Identical Class / Identical Subclass (Hanawa et al., 2021): Measures the proportion of test samples for which the top-1 attributed training sample shares the test sample's class (or subclass). If the attributions are based on similarity, they are expected to be predictive of the class of the test datapoint, as well as of different subclasses under a single label.

  • Model Randomization (Hanawa et al., 2021): Measures the correlation between the original TDA and the TDA of a model with randomized weights. Since the attributions are expected to depend on model parameters, the correlation between original and randomized attributions should be low.

  • Top-K Cardinality (Barshan et al., 2020): Measures the cardinality of the union of the top-K training samples. Since the attributions are expected to be dependent on the test input, they are expected to vary heavily for different test points, resulting in a low overlap (high metric value).

  • Mislabeled Data Detection (Koh and Liang, 2017): Computes the proportion of noisy training labels detected as a function of the percentage of inspected training samples. The samples are inspected in order according to their global TDA ranking, which is computed using local attributions. This produces a cumulative mislabeling detection curve. We expect to see a curve that rapidly increases as we check more of the training data, thus we compute the area under this curve.

  • Shortcut Detection (Koh and Liang, 2017): Assuming a known shortcut (Clever-Hans effect) has been identified in the model, this metric evaluates how effectively a TDA method identifies the shortcut samples as the most influential for predictions on samples containing the shortcut artifact. This process is referred to as Domain Mismatch Debugging in the original paper.

  • Mixed Datasets (Hammoudeh and Lowd, 2022): In a setting where a model has been trained on two datasets, a clean dataset (e.g., CIFAR-10) and an adversarial one (e.g., zeros from MNIST), this metric evaluates how highly the attributions rank adversarial training samples relative to clean samples when the model makes predictions on an adversarial example.
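
To make the linear datamodeling computation concrete, here is a minimal, library-agnostic sketch. This is not quanda's API: attributions, subsets, and retrained_outputs are hypothetical inputs that would normally come from an explainer and from retraining the model on the sampled subsets.

from typing import List

import torch
from scipy.stats import spearmanr

def linear_datamodeling_score(
    attributions: torch.Tensor,       # shape (n_train,): attributions for one test sample
    subsets: List[List[int]],         # training-set indices of each retraining subset
    retrained_outputs: torch.Tensor,  # shape (n_subsets,): output on the test sample of the
                                      # model retrained on the corresponding subset
) -> float:
    # Additive prediction of each counterfactual output: sum of attributions over the subset.
    predicted = torch.stack([attributions[idx].sum() for idx in subsets])
    # The score is the Spearman rank correlation between predicted and actual outputs.
    corr, _ = spearmanr(predicted.numpy(), retrained_outputs.numpy())
    return float(corr)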

Benchmarks

quanda comes with a few pre-computed benchmarks that can be conveniently used for evaluation in a plug-and-play manner. We are planning to significantly expand the number of benchmarks in the future. The following benchmarks are currently available:

| Benchmark | Modality | Model | Metric | Type |
|---|---|---|---|---|
| mnist_top_k_cardinality | Vision | MNIST | TopKCardinalityMetric | Heuristic |
| mnist_mixed_datasets | Vision | MNIST | MixedDatasetsMetric | Heuristic |
| mnist_class_detection | Vision | MNIST | ClassDetectionMetric | Downstream-Task-Evaluator |
| mnist_subclass_detection | Vision | MNIST | SubclassDetectionMetric | Downstream-Task-Evaluator |
| mnist_mislabeling_detection | Vision | MNIST | MislabelingDetectionMetric | Downstream-Task-Evaluator |
| mnist_shortcut_detection | Vision | MNIST | ShortcutDetectionMetric | Downstream-Task-Evaluator |
| mnist_linear_datamodeling_score | Vision | MNIST | LinearDatamodelingMetric | Ground Truth |

🔬 Getting Started

Installation

To install the latest version of quanda, use:

pip install git+https://github.com/dilyabareeva/quanda.git

quanda requires Python 3.7 or later. It is recommended to use a virtual environment to install the package.

Usage

In the following, we provide a quick guide to quanda usage. To begin using quanda, ensure you have the following (a minimal setup sketch follows the list):

  • Trained PyTorch Model (model): A PyTorch model that has already been trained on a relevant dataset. As a placeholder, we use the layer name "avgpool" below; please replace it with the name of one of the layers in your model.
  • PyTorch Dataset (train_set): The dataset used during the training of the model.
  • Test Batches (test_tensor) and Explanation Targets (target): A batch of test data (test_tensor) and the corresponding explanation targets (target). Generally, it is advisable to use the model's predicted labels as the targets. In the following, we assume the existence of a torch.utils.data.DataLoader to load the test data in batches, with variable name test_loader.
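
For illustration, these prerequisites could be set up roughly as follows. This is a minimal sketch and not part of quanda: the architecture, the checkpoint path, and the use of torchvision's MNIST are assumptions, chosen only so that the names model, train_set, test_loader, and DEVICE, as well as the layer name "avgpool" used in the examples below, refer to something concrete.

import torch
import torch.nn as nn
import torchvision
from torch.utils.data import DataLoader

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

class SmallConvNet(nn.Module):
    # Hypothetical stand-in for your trained model; note the submodule named
    # "avgpool", which matches the layer name used in the examples below.
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.avgpool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.avgpool(self.features(x)).flatten(1)
        return self.classifier(x)

model = SmallConvNet()
# In practice, load the weights of your already-trained model here, e.g.:
# model.load_state_dict(torch.load("checkpoint.pth", map_location=DEVICE))

# Training dataset the model was trained on, and a loader over the test split.
transform = torchvision.transforms.ToTensor()
train_set = torchvision.datasets.MNIST("./data", train=True, download=True, transform=transform)
test_set = torchvision.datasets.MNIST("./data", train=False, download=True, transform=transform)
test_loader = DataLoader(test_set, batch_size=32, shuffle=False)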

In the following usage examples, we will be using the SimilarityInfluence data attribution from Captum.

Metrics Usage

Next, we demonstrate how to evaluate explanations using the Model Randomization metric.

Step 1. Import dependencies and library components
import torch
from torch.utils.data import DataLoader
from tqdm import tqdm

from quanda.explainers.wrappers import captum_similarity_explain, CaptumSimilarity
from quanda.metrics.heuristics import ModelRandomizationMetric
Step 2. Create the explainer object

We now create our explainer. The device to be used by the explainer and metrics is inherited from the model, thus we set the model device explicitly.

# DEVICE is assumed to be defined beforehand, e.g.
# DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
model.to(DEVICE)

explainer_kwargs = {
    "layers": "avgpool",
    "model_id": "default_model_id",
    "cache_dir": "./cache"
}
explainer = CaptumSimilarity(
    model=model,
    train_dataset=train_set,
    **explainer_kwargs
)
Step 3. Initialize the metric

The ModelRandomizationMetric needs to instantiate a new explainer to generate explanations for a randomized model. These will be compared with the explanations of the original model. Therefore, explainer_cls is passed directly to the metric along with initialization parameters of the explainer.

explainer_kwargs = {
    "layers": "avgpool",
    "model_id": "randomized_model_id",
    "cache_dir": "./cache"
}
model_rand = ModelRandomizationMetric(
    model=model,
    train_dataset=train_set,
    explainer_cls=CaptumSimilarity,
    expl_kwargs=explainer_kwargs,
    correlation_fn="spearman",
    seed=42,
)
Step 4. Iterate over the test set and feed the batches first to the explainer, then to the metric
for i, (test_tensor, target) in enumerate(tqdm(test_loader)):
    test_tensor, target = test_tensor.to(DEVICE), target.to(DEVICE)
    tda = explainer.explain(
        test_tensor=test_tensor,
        targets=target
    )
    model_rand.update(test_data=test_tensor, explanations=tda, explanation_targets=target)

print("Model heuristics metric output:", model_rand.compute())

Benchmarks Usage

The pre-assembled benchmarks streamline the evaluation process by downloading the necessary data and models and running the evaluation in a single command. Step 1 and Step 2 from the previous section still need to be executed before running the benchmark. The following code demonstrates how to use the mnist_subclass_detection benchmark:

Step 3. Load a pre-assembled benchmark and score an explainer
# cache_dir, explain_fn_kwargs and batch_size are assumed to be defined beforehand.
subclass_detect = SubclassDetection.download(
    name="mnist_subclass_detection",
    cache_dir=cache_dir,
    device="cpu",
)
score = subclass_detect.evaluate(
    explainer_cls=CaptumSimilarity,
    expl_kwargs=explain_fn_kwargs,
    batch_size=batch_size,
)["score"]
print(f"Subclass Detection Score: {score}")

More detailed examples can be found in the tutorials folder.

Custom Explainers

In addition to the built-in explainers, quanda supports the evaluation of custom explainer methods. This section provides a guide on how to create a wrapper for a custom explainer that matches our interface.

Step 1. Create an explainer class

Your custom explainer should inherit from the base Explainer class provided by quanda. The first step is to initialize your custom explainer within the __init__ method.

from quanda.explainers.base import Explainer

class CustomExplainer(Explainer):
    def __init__(self, model, train_dataset, **kwargs):
        super().__init__(model, train_dataset, **kwargs)
        # Initialize your explainer here
Step 2. Implement the explain method

The core of your wrapper is the explain method. This function should take test samples and their corresponding target values as input and return a 2D tensor containing the influence scores.

  • test: The test batch for which explanations are generated.
  • targets: The target values for the explanations.

Ensure that the output tensor has the shape (test_samples, train_samples), where the entries in the train samples dimension are ordered in the same order as in the train_dataset that is being attributed.

def explain(
    self,
    test_tensor: torch.Tensor,
    targets: Union[List[int], torch.Tensor]
) -> torch.Tensor:
    # Compute your influence scores here; the result should have shape
    # (test_tensor.shape[0], len(self.train_dataset)).
    return influence_scores
Step 3. Implement the self_influence method (Optional)

By default, quanda includes a built-in method for calculating self-influence scores. This base implementation computes all attributions over the training dataset, and collects the diagonal values in the attribution matrix. However, you can override this method to provide a more efficient implementation. This method should calculate how much each training sample influences itself and return a tensor of the computed self-influence scores.

def self_influence(self, batch_size: int = 1) -> torch.Tensor:
    # Compute your self-influence scores here
    return self_influence_scores
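
For illustration, an override following the diagonal-collection strategy described above could look roughly like the sketch below. This is a minimal sketch, not quanda's built-in implementation; it assumes that self.train_dataset yields (input, label) pairs and that explain is implemented as in the previous step.

import torch
from torch.utils.data import DataLoader

def self_influence(self, batch_size: int = 1) -> torch.Tensor:
    # Attribute each training batch against the full training set and keep
    # each sample's influence on itself (the diagonal of the attribution matrix).
    loader = DataLoader(self.train_dataset, batch_size=batch_size, shuffle=False)
    diagonals, offset = [], 0
    for inputs, labels in loader:
        scores = self.explain(test_tensor=inputs, targets=labels)  # (batch, n_train)
        rows = torch.arange(scores.shape[0])
        diagonals.append(scores[rows, offset + rows])
        offset += scores.shape[0]
    return torch.cat(diagonals)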

For detailed examples, we refer to the existing explainer wrappers in quanda.

⚠️ Usage Tips and Caveats

  • Controlled Setting Evaluation: Many metrics require access to ground-truth labels for the datasets, such as the indices of the "shortcut samples" for the Shortcut Detection metric, or the indices of mislabeled (noisy) samples for the Mislabeling Detection metric. However, users often do not have access to these labels. To address this, we recommend either using one of our pre-built benchmark suites (see the Benchmarks section) or generating a custom benchmark (via the generate method) to compare explainers. Benchmarks provide a controlled environment for systematic evaluation.

  • Caching: Many explainers in our library generate reusable cached results. The cache_id and model_id parameters passed to various class instances are used to store these intermediate results. Ensure that each experiment is assigned a unique combination of these arguments; failing to do so could lead to incorrect reuse of cached results. If you wish to avoid reusing cached results, you can set the load_from_disk parameter to False.

  • Explainers Are Expensive To Calculate: Certain explainers, such as TracInCPRandomProj, may lead to out-of-memory (OOM) issues when applied to large models or datasets. In such cases, we recommend reducing memory usage by either reducing the dataset size or using a smaller model.

📓 Tutorials

We have included a few tutorials to demonstrate the usage of quanda:

  • Explainers: shows how different explainers can be used with quanda
  • Metrics: shows how to use the metrics in quanda to evaluate the performance of a data attribution method
  • Benchmarks: shows how to use the benchmarking tools in quanda to evaluate a data attribution method

To install the library with tutorial dependencies, run:

pip install -e '.[tutorials]'

👩‍💻Contributing

We welcome contributions to quanda! You can contribute by:

  • Opening an issue to report a bug or request a feature.
  • Submitting a pull request to fix a bug, add a new explainer wrapper, a new metric, or another feature.

A detailed guide on how to contribute to quanda can be found here.
