Attention Free Transformer - PyTorch
aft-pytorch
Unofficial PyTorch implementation of the Attention Free Transformer's AFT-Full layer by Zhai et al. [abs, pdf] from Apple Inc.
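For reference, the AFT-Full operation as defined in the paper replaces dot-product attention with element-wise operations over learned pairwise position biases w \in \mathbb{R}^{T \times T}:

Y_t = \sigma(Q_t) \odot \frac{\sum_{t'=1}^{T} \exp(K_{t'} + w_{t,t'}) \odot V_{t'}}{\sum_{t'=1}^{T} \exp(K_{t'} + w_{t,t'})}

where \sigma is the sigmoid and \odot is element-wise multiplication, so memory grows linearly with context size rather than quadratically.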
Installation
You can install aft-pytorch via pip:
pip install aft-pytorch
Usage
You can import the "AFT-Full" layer (as described in the paper) from the package like so:
import torch
from aft_pytorch import AFTFullAttention

layer = AFTFullAttention(
    dim=512,
    hidden_dim=64,
    heads=8
)

# a batch of 32 sequences, each with 10 timesteps of dimension 512
x = torch.rand(32, 10, 512)
y = layer(x)  # [32, 10, 512]
This layer is a plug-and-play replacement for self-attention in your existing networks/Transformers: you can swap out the self-attention layer for AFTFullAttention with minimal changes.
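As a rough illustration of that swap, here is a minimal encoder block that places AFTFullAttention in the slot usually held by self-attention. The block structure, pre-norm placement, and ff_mult are illustrative assumptions, not part of this package:

import torch
from torch import nn
from aft_pytorch import AFTFullAttention

class AFTEncoderBlock(nn.Module):
    # hypothetical Transformer block: AFT-Full replaces the self-attention sublayer
    def __init__(self, dim=512, hidden_dim=64, heads=8, ff_mult=4):
        super().__init__()
        self.attn = AFTFullAttention(dim=dim, hidden_dim=hidden_dim, heads=heads)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ff = nn.Sequential(
            nn.Linear(dim, dim * ff_mult),
            nn.GELU(),
            nn.Linear(dim * ff_mult, dim),
        )

    def forward(self, x):
        # pre-norm residual sublayers, a common Transformer layout
        x = x + self.attn(self.norm1(x))
        x = x + self.ff(self.norm2(x))
        return x

block = AFTEncoderBlock()
x = torch.rand(32, 10, 512)
y = block(x)  # [32, 10, 512]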
TODO
- Add full AFT architecture
- Add variants like AFT-Simple, AFT-Conv, AFT-Local (a sketch of AFT-Simple follows below)
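Of the planned variants, AFT-Simple is the smallest change: the paper drops the pairwise position biases entirely, which reduces the layer to a sigmoid-gated global average of values. A minimal single-head sketch following the paper's formulation (this is not the package's API, just an illustration):

import torch
from torch import nn

class AFTSimple(nn.Module):
    # sketch of AFT-Simple from the paper: AFT-Full with no position biases
    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)

    def forward(self, x):
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)
        # softmax over the time axis turns keys into attention weights,
        # giving one global context vector broadcast to every timestep
        context = (torch.softmax(k, dim=1) * v).sum(dim=1, keepdim=True)
        return torch.sigmoid(q) * context  # [batch, time, dim]

layer = AFTSimple(dim=512)
x = torch.rand(32, 10, 512)
y = layer(x)  # [32, 10, 512]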
Contributing
If you like this repo, please leave a star! If you have any amendments or suggestions, feel free to raise a PR or an issue.
Credits
@misc{zhai2021an,
title={An Attention Free Transformer},
author={Shuangfei Zhai and Walter Talbott and Nitish Srivastava and Chen Huang and Hanlin Goh and Joshua M. Susskind},
year={2021},
url={https://openreview.net/forum?id=pW--cu2FCHY}
}
License