Project description
Transformers Interpret is a model explainability tool designed to work exclusively with the 🤗 transformers package.
In line with the philosophy of the transformers package, Transformers Interpret allows any transformers model to be explained in just two lines. It even supports visualizations both in notebooks and as savable HTML files.
Install
pip install transformers-interpret
Supported:
- Python >= 3.6
- PyTorch >= 1.5.0
- transformers >= v3.0.0
- captum >= 0.3.1
The package does not work with Python 2.7 or below.
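If you want to double-check that your environment meets these requirements, a quick version check like the one below will do (plain Python, not part of the package itself):

# Print the versions of the relevant dependencies to confirm they meet the
# minimum versions listed above.
import sys
import torch
import transformers
import captum

print(sys.version.split()[0])    # Python, expect >= 3.6
print(torch.__version__)         # PyTorch, expect >= 1.5.0
print(transformers.__version__)  # transformers, expect >= 3.0.0
print(captum.__version__)        # captum, expect >= 0.3.1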
Documentation
Quick Start
Let's start by initializing a transformers model and tokenizer, and running it through the SequenceClassificationExplainer.

For this example we are using distilbert-base-uncased-finetuned-sst-2-english, a DistilBERT model fine-tuned on a sentiment analysis task.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_name = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# With both the model and tokenizer initialized we are now able to get explanations on an example text.
from transformers_interpret import SequenceClassificationExplainer
cls_explainer = SequenceClassificationExplainer(
    "I love you, I like you",
    model,
    tokenizer)
attributions = cls_explainer()
The word_attributions attribute of the result is the following list of tuples:
>>> attributions.word_attributions
[('[CLS]', 0.0),
('i', 0.2778544699186709),
('love', 0.7792370723380415),
('you', 0.38560088858031094),
(',', -0.01769750505546915),
('i', 0.12071898121557832),
('like', 0.19091105304734457),
('you', 0.33994871536713467),
('[SEP]', 0.0)]
Positive attribution numbers indicate that a word contributes positively towards the predicted class, while negative numbers indicate that a word contributes negatively towards the predicted class. Here we can see that "I love you" receives the strongest positive attributions.
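For longer texts it can be handy to rank these tuples by score. The snippet below is just plain Python over the word_attributions list shown above, not a feature of the package itself:

# Rank the (token, attribution) pairs to see which tokens pushed the
# prediction hardest in each direction.
ranked = sorted(attributions.word_attributions, key=lambda pair: pair[1], reverse=True)
print(ranked[:3])   # strongest positive contributions, e.g. ('love', 0.779...)
print(ranked[-3:])  # strongest negative contributions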
You can use predicted_class_index if you want to know what the predicted class actually is:
>>> cls_explainer.predicted_class_index
array(1)
And if the model has label names for each class, we can see these too using predicted_class_name:
>>> cls_explainer.predicted_class_name
'POSITIVE'
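These names come from the underlying model's configuration, so if you want the full index-to-label mapping you can read it straight off the standard 🤗 transformers config (a plain transformers attribute, not something added by this package):

# The class names live on the transformers model config.
print(model.config.id2label)
# For this checkpoint: {0: 'NEGATIVE', 1: 'POSITIVE'}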
Visualizing attributions
Sometimes the numeric attributions can be difficult to read, particularly when there is a lot of text. To help with that we also provide the visualize() method, which uses Captum's built-in visualization library to create an HTML file highlighting the attributions.

If you are in a notebook, calls to the visualize() method will display the visualization in-line. Alternatively, you can pass in a file path as an argument and an HTML file will be created, allowing you to view the explanation HTML in your browser.
cls_explainer.visualize("distilbert_viz.html")
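If you are working outside a notebook, one simple way to view the saved file is to open it via the standard library (a convenience sketch, not part of this package):

# Open the generated explanation HTML in the default browser, assuming the
# file was written by the visualize() call above.
import os
import webbrowser

webbrowser.open("file://" + os.path.abspath("distilbert_viz.html"))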
Explaining Attributions for a Non-Predicted Class
Attribution explanations are not limited to the predicted class. Let's test a more complex sentence that contains mixed sentiments.
In the example below we pass class_name="NEGATIVE" as an argument, indicating that we would like the attributions to be explained for the NEGATIVE class regardless of what the actual prediction is. Effectively, because this is a binary classifier, we are getting the inverse attributions.
cls_explainer = SequenceClassificationExplainer("I love you, I like you, I also kinda dislike you", model, tokenizer)
attributions = cls_explainer(class_name="NEGATIVE")
In this case, predicted_class_name still returns POSITIVE, because the model's prediction has not changed; we are simply choosing to inspect the attributions for the NEGATIVE class regardless of the predicted result.
>>> cls_explainer.predicted_class_name
'POSITIVE'
But when we visualize the attributions we can see that the words "...kinda dislike" are contributing to a prediction of the "NEGATIVE" class.
cls_explainer.visualize("distilbert_negative_attr.html")
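As a rough sanity check of the "inverse attributions" point above, you can compute the scores for both classes and compare them side by side; for a binary classifier the signs should roughly mirror each other. This is just illustrative code built on the word_attributions output documented earlier:

# Compare per-token attributions for the two classes of this binary model.
pos_attrs = cls_explainer(class_name="POSITIVE").word_attributions
neg_attrs = cls_explainer(class_name="NEGATIVE").word_attributions

for (token, pos_score), (_, neg_score) in zip(pos_attrs, neg_attrs):
    print(f"{token:>10}  POSITIVE: {pos_score:+.3f}  NEGATIVE: {neg_score:+.3f}")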
Getting attributions for different classes is particularly insightful for multiclass problems as it allows you to inspect model predictions for a number of different classes and sanity-check that the model is "looking" at the right things.
For a detailed explanation of this example please check out this multiclass classification notebook.
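As a minimal sketch of that idea (the model, tokenizer, and text below are hypothetical placeholders, not part of this project), you could loop over every class name exposed on the model's config:

# Hypothetical multiclass example: inspect attributions for every class.
multi_explainer = SequenceClassificationExplainer(
    "The service was slow but the food was fantastic",  # placeholder text
    multiclass_model,       # assumed: any multiclass AutoModelForSequenceClassification
    multiclass_tokenizer,   # assumed: its matching tokenizer
)
for class_name in multiclass_model.config.id2label.values():
    attrs = multi_explainer(class_name=class_name)
    print(class_name, attrs.word_attributions)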
Future Development
This package is still in its early days and there is much more planned. For a 1.0.0 release we're aiming to have:
- Clean and thorough documentation
- Support for Question Answering models
- Support for NER models
- Support for Multiple Choice models
- Ability to show attributions for multiple embedding types, rather than just the word embeddings.
- Additional attribution methods
- In depth examples
- A nice logo (thanks @Voyz)
- And more... feel free to submit your suggestions!
Questions / Get In Touch
The main contributor to this repository is @cdpierse.
If you have any questions, suggestions, or would like to make a contribution (please do 😁), feel free to get in touch at charlespierse@gmail.com.
I'd also highly suggest checking out Captum if you find model explainability and interpretability interesting. They are doing amazing and important work. In fact, this package stands on the shoulders of the incredible work being done by the teams at PyTorch Captum and Hugging Face, and would not exist if not for the amazing job they are both doing in the fields of model interpretability and NLP respectively.
Miscellaneous
Captum Links
Below are some links I used to help me get this package together using Captum. Thank you to @davidefiocco for your very insightful GIST.