aft-pytorch
Unofficial PyTorch implementation of the AFT-Full layer from the Attention Free Transformer by Zhai et al. of Apple Inc. [abs, pdf].
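For reference, the AFT-Full operation defined in the paper replaces dot-product self-attention with an element-wise weighting of the values plus a learned pairwise position bias:

Y_t = \sigma(Q_t) \odot \frac{\sum_{t'=1}^{T} \exp\left(K_{t'} + w_{t,t'}\right) \odot V_{t'}}{\sum_{t'=1}^{T} \exp\left(K_{t'} + w_{t,t'}\right)}

where \sigma is the sigmoid, \odot is element-wise multiplication, Q, K, V are linear projections of the input, and w \in \mathbb{R}^{T \times T} are learned pairwise position biases.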
Installation
You can install aft-pytorch via pip:

pip install aft-pytorch
Usage
You can import the "AFT-Full" layer (as described in the paper) from the package like so:
import torch
from aft_pytorch import AFTFullAttention

layer = AFTFullAttention(
    seqlen=10,
    dim=512,
    hidden_dim=64,
    heads=8
)

# a batch of 32 sequences, each with 10 timesteps of dimension 512
x = torch.rand(32, 10, 512)
y = layer(x) # [32, 10, 512]
This layer is plug-and-play with your existing networks / Transformers: you can swap out the self-attention layer with AFTFullAttention with minimal changes.
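As a minimal sketch of that swap, the block below drops AFTFullAttention into a pre-norm Transformer encoder block. The surrounding block structure (layer norms, feed-forward width) is illustrative and not prescribed by aft-pytorch; AFTFullAttention itself is used exactly as in the usage example above.

import torch
import torch.nn as nn
from aft_pytorch import AFTFullAttention

class AFTEncoderBlock(nn.Module):
    def __init__(self, seqlen, dim, hidden_dim, heads, ff_mult=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        # drop-in replacement for the usual self-attention layer
        self.attn = AFTFullAttention(seqlen=seqlen, dim=dim, hidden_dim=hidden_dim, heads=heads)
        self.norm2 = nn.LayerNorm(dim)
        self.ff = nn.Sequential(
            nn.Linear(dim, dim * ff_mult),
            nn.GELU(),
            nn.Linear(dim * ff_mult, dim),
        )

    def forward(self, x):
        x = x + self.attn(self.norm1(x))  # AFT layer in place of self-attention
        x = x + self.ff(self.norm2(x))
        return x

block = AFTEncoderBlock(seqlen=10, dim=512, hidden_dim=64, heads=8)
x = torch.rand(32, 10, 512)
y = block(x)  # [32, 10, 512]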
TODO
- Add full AFT architecture
- Add variants like AFT-Simple, AFT-Conv, AFT-Local (a rough sketch of AFT-Simple follows below)
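For the curious, AFT-Simple is the variant that drops the pairwise position biases of AFT-Full. The sketch below is hypothetical and not part of aft-pytorch; the module name and constructor arguments are made up for illustration only.

# Hypothetical AFT-Simple sketch (NOT part of aft-pytorch): sigmoid(Q) gated by a
# softmax-over-time pooling of V, i.e. AFT-Full with the position biases removed.
import torch
import torch.nn as nn

class AFTSimpleSketch(nn.Module):
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.to_q = nn.Linear(dim, hidden_dim)
        self.to_k = nn.Linear(dim, hidden_dim)
        self.to_v = nn.Linear(dim, hidden_dim)
        self.project = nn.Linear(hidden_dim, dim)

    def forward(self, x):
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)
        weights = torch.softmax(k, dim=1)                 # softmax over the sequence dimension
        pooled = (weights * v).sum(dim=1, keepdim=True)   # (B, 1, hidden_dim)
        y = torch.sigmoid(q) * pooled                     # broadcast over timesteps
        return self.project(y)                            # back to (B, T, dim)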
Contributing
If you like this repo, please leave a star! If there are any amends or suggestions, feel free to raise a PR/issue.
Credits
@misc{zhai2021an,
    title={An Attention Free Transformer},
    author={Shuangfei Zhai and Walter Talbott and Nitish Srivastava and Chen Huang and Hanlin Goh and Joshua M. Susskind},
    year={2021},
    url={https://openreview.net/forum?id=pW--cu2FCHY}
}
License