Transformers visualizer
Explain your 🤗 transformers without effort!
Transformers visualizer is a Python package designed to work with the 🤗 Transformers package. Given a model and a tokenizer, it supports multiple ways to explain your model by plotting its internal behavior.

This package is mostly based on the Captum tutorials [1] [2].
Installation
```bash
pip install transformers-visualizer
```
Quickstart
Let's define a model, a tokenizer and a text input for the following examples.
```python
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"
model = AutoModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

text = "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder."
```
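The matrices the visualizer plots are standard scaled dot-product attention weights. As a minimal NumPy sketch (toy `Q`/`K` values, not taken from BERT), one token-to-token attention matrix arises like this:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
seq_len, d_k = 5, 8  # toy sequence length and key dimension

Q = rng.standard_normal((seq_len, d_k))  # query vectors, one per token
K = rng.standard_normal((seq_len, d_k))  # key vectors, one per token

# token-to-token attention matrix: softmax(Q K^T / sqrt(d_k))
attention = softmax(Q @ K.T / np.sqrt(d_k))

print(attention.shape)         # (5, 5): one weight per token pair
print(attention.sum(axis=-1))  # each row sums to 1
```

Each row of `attention` is a probability distribution saying how much one token attends to every other token; this is the grid of values the heatmaps display.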
Attention matrices
Plot attention matrices of a specific layer
```python
from transformers_visualizer import TokenToTokenAttentions

visualizer = TokenToTokenAttentions(model, tokenizer)
visualizer(text)
```
Instead of using the `__call__` method, you can use the `compute` method. Both work in place, but `compute` allows method chaining.

The `plot` method accepts a layer index as a parameter to specify which layer of your model you want to plot. By default, the last layer is plotted.
```python
import matplotlib.pyplot as plt

visualizer.plot(layer_index=6)
plt.savefig("token_to_token.jpg")
```
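The in-place `compute` / chainable-call pattern described above can be sketched with a toy class (hypothetical and illustrative only, not the package's actual implementation): `compute` stores its result on the instance and returns `self`, which is what makes chaining possible.

```python
# Hypothetical sketch of the in-place compute / chaining pattern;
# `ToyVisualizer` is illustrative, not the package's real class.
class ToyVisualizer:
    def __init__(self):
        self.attentions = None

    def compute(self, text):
        self.attentions = f"attentions for: {text}"  # stand-in for real work
        return self  # returning self enables chaining

    def __call__(self, text):
        self.compute(text)  # same work, but nothing returned to chain on

    def plot(self, layer_index=-1):
        return f"plotting layer {layer_index} of {self.attentions}"

v = ToyVisualizer()
print(v.compute("some text").plot())  # chaining: compute(...).plot()
```

Both entry points mutate the visualizer in place; only `compute` hands the object back for the next call.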
Plot attention matrices normalized over the head axis
You can specify the `ord` parameter used in `torch.linalg.norm` in the `__call__` and `compute` methods. By default, the L2 norm is used.
```python
from transformers_visualizer import TokenToTokenNormalizedAttentions

visualizer = TokenToTokenNormalizedAttentions(model, tokenizer)
visualizer.compute(text).plot()
```
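The package uses `torch.linalg.norm` for this reduction; the same head-axis operation can be sketched in NumPy (toy attention values, assuming the per-layer attentions have shape `(num_heads, seq_len, seq_len)`):

```python
import numpy as np

num_heads, seq_len = 12, 5
rng = np.random.default_rng(0)
# toy per-head attention matrices, shape (num_heads, seq_len, seq_len)
attentions = rng.random((num_heads, seq_len, seq_len))

# collapse the head axis with an L2 norm, leaving one (seq_len, seq_len)
# matrix that summarizes all heads for the layer
normalized = np.linalg.norm(attentions, ord=2, axis=0)

print(normalized.shape)  # (5, 5)
```

Collapsing the heads this way yields a single matrix per layer instead of one per head, which makes the plot easier to read at the cost of per-head detail.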
Upcoming features
- Adding an option to specify head/layer indices to plot.
- Adding other plotting backends such as Plotly, Bokeh, Altair.
- Implementing other visualizers such as vector norm.
References
Acknowledgements
- Transformers Interpret for the idea of this project.
Hashes for transformers_visualizer-0.2.0.tar.gz

| Algorithm | Hash digest |
|---|---|
| SHA256 | a8f07924b64032b0f4ff62ed74f38a4cd8586374a34ec0ff8c41c748f12f0012 |
| MD5 | 4c52eceb5790cc04268523dfd34ef95c |
| BLAKE2b-256 | ff7674ffee8ce155d7a95a19e7c494308184de42a7f58d8d986cbd4bd9264ee6 |

Hashes for transformers_visualizer-0.2.0-py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | 21c7e9136edd3469ac6167f5b21a77c663800abb0d74e7cfdf06f74191039bbb |
| MD5 | 72b40428e05c5d52af237f12d05fb4e3 |
| BLAKE2b-256 | cdb723c3a5301672dbdb01f6f7d4259d109f78ae74b27588c6553cdb496c27cb |