
Interpretability for Sequence Generation Models 🔍

Project description


Inseq is a PyTorch-based, hackable toolkit that democratizes access to common post-hoc interpretability analyses of sequence generation models.

Installation

Inseq is available on PyPI and can be installed with pip:

pip install inseq

Install the extras for visualization in Jupyter notebooks and 🤗 datasets attribution with pip install inseq[notebook,datasets].

Dev Installation

To install the package for development, clone the repository and run the following commands:

cd inseq
make poetry-download  # Download and install the Poetry package manager
make install          # Install the package and all dependencies

If you have a GPU available, use make install-gpu to install the latest torch version with GPU support.

For library developers, the make install-dev command (or its GPU-friendly counterpart make install-dev-gpu) installs all development dependencies (quality, docs, extras).

After installation, you should be able to run make fast-test and make lint without errors.

Installation FAQ
  • Installing the tokenizers package requires a Rust compiler. You can install Rust from https://rustup.rs and source $HOME/.cargo/env to add it to your PATH.

  • Installing sentencepiece requires several build tools; install them with sudo apt-get install cmake build-essential pkg-config (Debian/Ubuntu) or brew install cmake gperftools pkg-config (macOS).

Example usage in Python

This example uses the Integrated Gradients attribution method to attribute the English-French translation of a sentence taken from the WinoMT corpus:

import inseq

model = inseq.load_model("Helsinki-NLP/opus-mt-en-fr", "integrated_gradients")
out = model.attribute(
    "The developer argued with the designer because her idea cannot be implemented.",
    n_steps=100
)
out.show()

This produces a visualization of the attribution scores for each token in the input sentence (token-level aggregation is handled automatically). Here is what the visualization looks like inside a Jupyter Notebook:

WinoMT Attribution Map
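Under the hood, Integrated Gradients attributes a prediction by averaging gradients along a straight-line path from a baseline input to the actual input (Inseq delegates the computation to Captum). As a minimal, self-contained sketch of the method itself, here is a hypothetical integrated_gradients helper applied to a toy differentiable function, not Inseq's actual implementation:

```python
def integrated_gradients(grad_f, x, baseline, n_steps=100):
    """Riemann approximation of Integrated Gradients:
    IG_i(x) = (x_i - baseline_i) * mean of dF/dx_i evaluated along the
    straight-line path from baseline to x."""
    dim = len(x)
    grad_sums = [0.0] * dim
    for step in range(n_steps):
        alpha = step / (n_steps - 1)  # interpolation coefficient in [0, 1]
        point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        grads = grad_f(point)
        for i in range(dim):
            grad_sums[i] += grads[i]
    return [(x[i] - baseline[i]) * grad_sums[i] / n_steps for i in range(dim)]

# Toy target F(x) = x0**2 + 3*x1, with gradient [2*x0, 3].
grad_f = lambda p: [2 * p[0], 3.0]
attr = integrated_gradients(grad_f, x=[2.0, 1.0], baseline=[0.0, 0.0])
# Completeness axiom: the attributions sum to F(x) - F(baseline) = 7.
```

The n_steps argument in the example above controls the granularity of this path integral: more steps give a closer approximation at the cost of more forward and backward passes.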

Inseq also supports decoder-only models such as GPT-2, enabling usage of a variety of attribution methods and customizable settings directly from the console:

import inseq

model = inseq.load_model("gpt2", "integrated_gradients")
model.attribute(
    "Hello ladies and",
    generation_args={"max_new_tokens": 9},
    n_steps=500,
    internal_batch_size=50
).show()

GPT-2 Attribution in the console
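The internal_batch_size parameter (forwarded to Captum) caps how many of the n_steps interpolated inputs are scored in a single forward pass, trading speed for memory. As a rough illustration of the idea (not Inseq's actual implementation), 500 steps processed in chunks of 50 means 10 forward passes:

```python
def chunked(items, size):
    """Split a list of interpolation steps into fixed-size batches."""
    return [items[i:i + size] for i in range(0, len(items), size)]

steps = list(range(500))      # stand-ins for 500 interpolated inputs
batches = chunked(steps, 50)  # internal_batch_size=50
# -> 10 batches of 50 steps each
```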

Current Features

  • 🚀 Feature attribution of sequence generation for most ForConditionalGeneration (encoder-decoder) and ForCausalLM (decoder-only) models from 🤗 Transformers.

  • 🚀 Support for single and batched attribution using multiple gradient-based feature attribution methods from Captum.

  • 🚀 Support for basic single-layer and layer-aggregated attention attribution methods with one or multiple aggregated heads.

  • 🚀 Post-hoc aggregation of feature attribution maps via Aggregator classes.

  • 🚀 Attribution visualization in notebooks, in the browser, and on the command line.

  • 🚀 CLI for attributing single examples or entire 🤗 datasets.

  • 🚀 Custom attribution target functions, supporting advanced use cases such as contrastive and uncertainty-weighted feature attributions.

  • 🚀 Extraction and visualization of custom step scores (e.g. probability, entropy) alongside attribution maps.
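To illustrate what post-hoc aggregation does conceptually (a sketch, not Inseq's Aggregator API), subword-level attribution scores can be merged into word-level scores by summing over the pieces of each word, here assuming GPT-2-style tokens where a leading "Ġ" marks a word boundary:

```python
def aggregate_subwords(tokens, scores):
    """Merge subword attribution scores into word-level scores by summation."""
    words, word_scores = [], []
    for tok, score in zip(tokens, scores):
        if tok.startswith("Ġ") or not words:
            words.append(tok.lstrip("Ġ"))      # start a new word
            word_scores.append(score)
        else:
            words[-1] += tok                    # continuation subword
            word_scores[-1] += score
    return words, word_scores

tokens = ["The", "Ġdevelop", "er", "Ġargued"]
scores = [0.1, 0.2, 0.1, 0.4]
words, word_scores = aggregate_subwords(tokens, scores)
# -> (["The", "developer", "argued"], [0.1, 0.3, 0.4])
```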

Planned Development

  • ⚙️ Support more attention-based and occlusion-based feature attribution methods (documented in #107 and #108).

  • ⚙️ Interoperability with ferret for attribution plausibility and faithfulness evaluation.

  • ⚙️ Rich and interactive visualizations in a tabbed interface using Gradio Blocks.

  • ⚙️ Baked-in advanced capabilities for contrastive and uncertainty-weighted feature attribution.

Using the Inseq client

The Inseq library also provides useful client commands to enable repeated attribution of individual examples and even entire 🤗 datasets directly from the console. See the available options by typing inseq -h in the terminal after installing the package.

For now, two commands are supported:

  • inseq attribute: Wraps the attribute method shown above, requires explicit inputs to be attributed.

  • inseq attribute-dataset: Enables attribution for a full dataset using Hugging Face datasets.load_dataset.

Both commands support the full range of parameters available for attribute, attribution visualization in the console, and saving outputs to disk.

Example: The following command performs source- and target-side attribution of Italian translations for a dummy sample of 20 English sentences taken from the FLORES-101 parallel corpus, using a MarianNMT translation model from Hugging Face transformers. The visualizations are saved in HTML format to the file attributions.html. See the --help flag for more options.

inseq attribute-dataset \
  --model_name_or_path Helsinki-NLP/opus-mt-en-it \
  --attribution_method saliency \
  --do_prefix_attribution \
  --dataset_name inseq/dummy_enit \
  --input_text_field en \
  --dataset_split "train[:20]" \
  --viz_path attributions.html \
  --batch_size 8 \
  --hide

Contributing

Our vision for Inseq is to create a centralized, comprehensive and robust set of tools to enable fair and reproducible comparisons in the study of sequence generation models. To achieve this goal, contributions from researchers and developers interested in these topics are more than welcome. Please see our contributing guidelines and our code of conduct for more information.


