
Interpretability for Sequence Generation Models 🔍


Inseq is a PyTorch-based hackable toolkit to democratize access to common post-hoc interpretability analyses of sequence generation models.

Installation

Inseq is available on PyPI and can be installed with pip for Python >= 3.9, <= 3.11:

# Install latest stable version
pip install inseq

# Alternatively, install latest development version
pip install git+https://github.com/inseq-team/inseq.git

To install extras for visualization in Jupyter Notebooks and 🤗 datasets attribution, use pip install inseq[notebook,datasets].

Dev Installation

To install the package for development, clone the repository and run the following commands:
cd inseq
make poetry-download # Download and install the Poetry package manager
make install # Installs the package and all dependencies

If you have a GPU available, use make install-gpu to install the latest torch version with GPU support.

Library developers can use make install-dev (or its GPU-friendly counterpart make install-dev-gpu) to install all development dependencies (quality, docs, extras).

After installation, you should be able to run make fast-test and make lint without errors.

Installation FAQ
  • Installing the tokenizers package requires a Rust compiler installation. You can install Rust from https://rustup.rs and add $HOME/.cargo/env to your PATH.

  • Installing sentencepiece requires various build tools. Install them with sudo apt-get install cmake build-essential pkg-config (Linux) or brew install cmake gperftools pkg-config (macOS).

  • Inseq does not work with older versions of jaxlib. Install a compatible version with poetry install --with jax.

Example usage in Python

This example uses the Integrated Gradients attribution method to attribute the English-French translation of a sentence taken from the WinoMT corpus:

import inseq

model = inseq.load_model("Helsinki-NLP/opus-mt-en-fr", "integrated_gradients")
out = model.attribute(
  "The developer argued with the designer because her idea cannot be implemented.",
  n_steps=100
)
out.show()

This produces a visualization of the attribution scores for each token in the input sentence (token-level aggregation is handled automatically). Here is what the visualization looks like inside a Jupyter Notebook:

WinoMT Attribution Map
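For intuition, Integrated Gradients attributes a prediction by averaging gradients along a straight-line path from a baseline input to the actual input (the role of n_steps above), scaled by the input difference. A minimal numeric sketch on a toy differentiable function, not the Inseq API; the function f and its analytic gradient here are illustrative assumptions:

```python
def f(x1, x2):
    # Toy scalar "model output": f(x) = x1^2 + 3 * x2
    return x1 ** 2 + 3 * x2

def grad_f(x1, x2):
    # Analytic gradient of f: (df/dx1, df/dx2)
    return (2 * x1, 3.0)

def integrated_gradients(x, baseline=(0.0, 0.0), n_steps=100):
    """Riemann approximation of IG: (x - baseline) * mean gradient along the path."""
    avg = [0.0, 0.0]
    for k in range(1, n_steps + 1):
        alpha = k / n_steps
        point = tuple(b + alpha * (xi - b) for xi, b in zip(x, baseline))
        g = grad_f(*point)
        avg = [a + gi / n_steps for a, gi in zip(avg, g)]
    return tuple((xi - b) * a for xi, b, a in zip(x, baseline, avg))

attr = integrated_gradients((2.0, 1.0))
# Completeness axiom: attributions sum (approximately) to f(x) - f(baseline)
print(attr, sum(attr), f(2.0, 1.0) - f(0.0, 0.0))
```

With more steps the Riemann sum converges, and the attributions sum to the difference between the model output at the input and at the baseline.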

Inseq also supports decoder-only models such as GPT-2, enabling usage of a variety of attribution methods and customizable settings directly from the console:

import inseq

model = inseq.load_model("gpt2", "integrated_gradients")
model.attribute(
    "Hello ladies and",
    generation_args={"max_new_tokens": 9},
    n_steps=500,
    internal_batch_size=50
).show()

GPT-2 Attribution in the console

Features

  • 🚀 Feature attribution of sequence generation for most ForConditionalGeneration (encoder-decoder) and ForCausalLM (decoder-only) models from 🤗 Transformers

  • 🚀 Support for multiple feature attribution methods, sourced in part from Captum

  • 🚀 Post-processing of attribution maps via Aggregator classes.

  • 🚀 Attribution visualization in notebooks, browser and command line.

  • 🚀 Attribute single examples or entire 🤗 datasets with the Inseq CLI.

  • 🚀 Custom attribution of target functions, supporting advanced use cases such as contrastive and uncertainty-weighted feature attributions.

  • 🚀 Extraction and visualization of custom step scores (e.g. probability, entropy) alongside attribution maps.

Supported methods

Use the inseq.list_feature_attribution_methods function to list all available method identifiers and inseq.list_step_functions to list all available step functions. The supported methods fall into three categories:

  • Gradient-based attribution

  • Internals-based attribution

  • Perturbation-based attribution

Step functions

Step functions are used to extract custom scores from the model at each step of the attribution process with the step_scores argument in model.attribute. They can also be used as targets for attribution methods relying on model outputs (e.g. gradient-based methods) by passing them as the attributed_fn argument. The following step functions are currently supported:

  • logits: Logits of the target token.
  • probability: Probability of the target token. Can also be used for log-probability by passing logprob=True.
  • entropy: Entropy of the predictive distribution.
  • crossentropy: Cross-entropy loss between target token and predicted distribution.
  • perplexity: Perplexity of the target token.
  • contrast_logits/contrast_prob: Logits/probabilities of the target token when different contrastive inputs are provided to the model. Equivalent to logits/probability when no contrastive inputs are provided.
  • contrast_logits_diff/contrast_prob_diff: Difference in logits/probability between the original and contrastive (foil) target tokens; can be used for contrastive evaluation as in contrastive attribution (Yin and Neubig, 2022).
  • pcxmi: Point-wise Contextual Cross-Mutual Information (P-CXMI) for the target token given original and contrastive contexts (Yin et al. 2021).
  • kl_divergence: KL divergence of the predictive distribution given original and contrastive contexts. Can be restricted to most likely target token options using the top_k and top_p parameters.
  • in_context_pvi: In-context Pointwise V-usable Information (PVI) to measure the amount of contextual information used in model predictions (Lu et al. 2023).
  • mc_dropout_prob_avg: Average probability of the target token across multiple samples using MC Dropout (Gal and Ghahramani, 2016).
  • top_p_size: The number of tokens with cumulative probability greater than top_p in the predictive distribution of the model.
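Several of these scores can be computed directly from a model's next-token logits. A self-contained sketch with toy logits rather than real model outputs; the exact normalizations used by Inseq may differ (here perplexity is taken as exp of the token cross-entropy):

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

# Toy next-token logits over a 4-token vocabulary; target token has index 0
logits = [2.0, 1.0, 0.5, -1.0]
target = 0
probs = softmax(logits)

probability = probs[target]                              # `probability`
crossentropy = -math.log(probs[target])                  # `crossentropy`
perplexity = math.exp(crossentropy)                      # `perplexity` (= 1 / p(target))
entropy = -sum(p * math.log(p) for p in probs if p > 0)  # `entropy` of the distribution

def top_p_size(probs, top_p=0.9):
    # `top_p_size`: tokens needed to exceed cumulative probability top_p
    cum, n = 0.0, 0
    for p in sorted(probs, reverse=True):
        cum += p
        n += 1
        if cum > top_p:
            break
    return n

print(probability, entropy, perplexity, top_p_size(probs))
```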

The following example computes contrastive attributions using the contrast_prob_diff step function:

import inseq

attribution_model = inseq.load_model("gpt2", "input_x_gradient")

# Perform the contrastive attribution:
# Regular (forced) target -> "The manager went home because he was sick"
# Contrastive target      -> "The manager went home because she was sick"
out = attribution_model.attribute(
    "The manager went home because",
    "The manager went home because he was sick",
    attributed_fn="contrast_prob_diff",
    contrast_targets="The manager went home because she was sick",
    # We also visualize the corresponding step score
    step_scores=["contrast_prob_diff"]
)
out.show()
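Conceptually, contrast_prob_diff at a given step is the difference between the probability the model assigns to the original target token and the one it assigns to the contrastive (foil) token. A toy sketch with made-up logits and a hypothetical mini-vocabulary, for illustration only (not actual GPT-2 outputs):

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

# Hypothetical next-token logits after "The manager went home because"
# over a toy vocabulary (illustrative numbers only)
vocab = ["he", "she", "the", "it"]
logits = [3.0, 1.5, 0.5, 0.2]
probs = dict(zip(vocab, softmax(logits)))

# contrast_prob_diff at this step: p(original target) - p(contrastive target)
diff = probs["he"] - probs["she"]
print(f"contrast_prob_diff = {diff:.3f}")
```

A large positive value at the pronoun position would indicate the model strongly prefers the original continuation over the foil, which is exactly the quantity the contrastive attribution above distributes over the input tokens.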

Refer to the documentation for an example including custom function registration.

Using the Inseq client

The Inseq library also provides useful client commands to enable repeated attribution of individual examples and even entire 🤗 datasets directly from the console. See the available options by typing inseq -h in the terminal after installing the package.

For now, two commands are supported:

  • inseq attribute: Wraps the attribute method shown above, requires explicit inputs to be attributed.

  • inseq attribute-dataset: Enables attribution for a full dataset using Hugging Face datasets.load_dataset.

Both commands support the full range of parameters available for attribute, attribution visualization in the console and saving outputs to disk.

Example: The following command can be used to perform attribution (both source and target-side) of Italian translations for a dummy sample of 20 English sentences taken from the FLORES-101 parallel corpus, using a MarianNMT translation model from Hugging Face transformers. We save the visualizations in HTML format in the file attributions.html. See the --help flag for more options.

inseq attribute-dataset \
  --model_name_or_path Helsinki-NLP/opus-mt-en-it \
  --attribution_method saliency \
  --do_prefix_attribution \
  --dataset_name inseq/dummy_enit \
  --input_text_field en \
  --dataset_split "train[:20]" \
  --viz_path attributions.html \
  --batch_size 8 \
  --hide

Planned Development

  • ⚙️ Support more attention-based and occlusion-based feature attribution methods (documented in #107 and #108).

  • ⚙️ Interoperability with ferret for attribution plausibility and faithfulness evaluation.

  • ⚙️ Rich and interactive visualizations in a tabbed interface using Gradio Blocks.

Contributing

Our vision for Inseq is to create a centralized, comprehensive and robust set of tools to enable fair and reproducible comparisons in the study of sequence generation models. To achieve this goal, contributions from researchers and developers interested in these topics are more than welcome. Please see our contributing guidelines and our code of conduct for more information.

Citing Inseq

If you use Inseq in your research, we suggest mentioning the specific release (e.g. v0.4.0) and kindly ask you to cite our reference paper:

@inproceedings{sarti-etal-2023-inseq,
    title = "Inseq: An Interpretability Toolkit for Sequence Generation Models",
    author = "Sarti, Gabriele  and
      Feldhus, Nils  and
      Sickert, Ludwig  and
      van der Wal, Oskar and
      Nissim, Malvina and
      Bisazza, Arianna",
    booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.acl-demo.40",
    doi = "10.18653/v1/2023.acl-demo.40",
    pages = "421--435",
}

Research using Inseq

Inseq has been used in various research projects. A list of known publications that use Inseq to conduct interpretability analyses of generative models is shown below. If you know more, please let us know or submit a pull request (last updated: December 2023).

2023
  1. Inseq: An Interpretability Toolkit for Sequence Generation Models (Sarti et al., 2023)
  2. Are Character-level Translations Worth the Wait? Comparing Character- and Subword-level Models for Machine Translation (Edman et al., 2023)
  3. Response Generation in Longitudinal Dialogues: Which Knowledge Representation Helps? (Mousavi et al., 2023)
  4. Quantifying the Plausibility of Context Reliance in Neural Machine Translation (Sarti et al., 2023)
  5. A Tale of Pronouns: Interpretability Informs Gender Bias Mitigation for Fairer Instruction-Tuned Machine Translation (Attanasio et al., 2023)
  6. Assessing the Reliability of Large Language Model Knowledge (Wang et al., 2023)
  7. Attribution and Alignment: Effects of Local Context Repetition on Utterance Production and Comprehension in Dialogue (Molnar et al., 2023)

