Transformers visualizer
Explain your 🤗 transformers without effort!
Transformers visualizer is a Python package designed to work with the 🤗 transformers package. Given a model and a tokenizer, it supports multiple ways to explain your model by plotting its internal behavior.
This package is mostly based on the Captum tutorials [1] [2].
Installation
```shell
pip install transformers-visualizer
```
Quickstart
Let's define a model, a tokenizer, and a text input for the following examples.
```python
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"
model = AutoModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

text = "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder."
```
Visualizers
Attention matrices of a specific layer
```python
from transformers_visualizer import TokenToTokenAttentions

visualizer = TokenToTokenAttentions(model, tokenizer)
visualizer(text)
```
Instead of using the `__call__` method, you can use the `compute` method. Both work in place; `compute` additionally returns the visualizer, which allows method chaining.
The `plot` method accepts a layer index as a parameter to specify which layer of your model you want to plot. By default, the last layer is plotted.
```python
import matplotlib.pyplot as plt

visualizer.plot(layer_index=6)
plt.savefig("token_to_token.jpg")
```
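For intuition, plotting token-to-token attentions boils down to an image plot of a `(sequence_length, sequence_length)` array. Here is a minimal standalone sketch with hypothetical data (this is an illustration, not the package's own plotting code):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe for scripts
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical attention weights for a 4-token sequence; each row sums to 1,
# like a softmax-normalized attention matrix.
attn = np.random.rand(4, 4)
attn /= attn.sum(axis=1, keepdims=True)

fig, ax = plt.subplots()
im = ax.imshow(attn, cmap="viridis")
ax.set_xlabel("key tokens")
ax.set_ylabel("query tokens")
fig.colorbar(im, ax=ax)
fig.savefig("toy_attention.jpg")
```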
Attention matrices normalized across head axis
You can specify the order used by `torch.linalg.norm` in the `__call__` and `compute` methods. By default, an L2 norm is used.
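To illustrate what this normalization does, the sketch below applies `torch.linalg.norm` directly to a toy attention tensor (shapes and values are hypothetical, not produced by the package):

```python
import torch

# Toy stand-in for one layer's attentions: (num_heads, seq_len, seq_len).
attentions = torch.ones(2, 3, 3)

# Collapse the head axis with the default L2 norm (ord=2)...
l2 = torch.linalg.norm(attentions, ord=2, dim=0)  # each entry: sqrt(1**2 + 1**2)

# ...or pass a different order, e.g. an L1 norm.
l1 = torch.linalg.norm(attentions, ord=1, dim=0)  # each entry: |1| + |1|
```

The result in both cases is a single `(seq_len, seq_len)` matrix that summarizes all heads of the layer.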
```python
from transformers_visualizer import TokenToTokenNormalizedAttentions

visualizer = TokenToTokenNormalizedAttentions(model, tokenizer)
visualizer.compute(text).plot()
```
Upcoming features
- Adding an option to specify head/layer indices to plot.
- Adding other plotting backends such as Plotly, Bokeh, or Altair.
- Implementing other visualizers such as vector norm.
References
Acknowledgements
- Transformers Interpret for the idea of this project.
Download files
Download the file for your platform.
Source Distribution
Built Distribution
Hashes for transformers_visualizer-0.2.1.tar.gz

| Algorithm | Hash digest |
|---|---|
| SHA256 | e84a3085b1fdcaf0b2fd28f32adf2c251a6101a152d9fd887bfb13fa6ee153c5 |
| MD5 | e7579a465a41372438234b4f504eba2f |
| BLAKE2b-256 | 3fc04a5fc71e874cb4f57794da0fc42b32ab9a903190f13152c64c127d48d80f |
Hashes for transformers_visualizer-0.2.1-py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | b0426e71c9ba674104eb9c5e4cdc20337c37f42a4e9c17e67bd23cced0417783 |
| MD5 | f137bbe64aaec8f811f1354726652024 |
| BLAKE2b-256 | cdf5fa51ce83e53941291a7256cd18ef2eca8428b624fafc233e78066f564d1b |