Transformer Implementations
A collection of transformer implementations, with some examples built on them.
Implemented:
- Vanilla Transformer
- ViT - Vision Transformer
- DeiT - Data-efficient image Transformers
Installation
$ pip install transformer-implementations
Language Translation
from "Attention is All You Need": https://arxiv.org/pdf/1706.03762.pdf
Models trained with this implementation:
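This page does not show the package's own API, but the building block behind the vanilla Transformer is the scaled dot-product attention defined in "Attention is All You Need". A minimal NumPy sketch of that operation (an illustration, not this package's code):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V,
    as defined in "Attention is All You Need"."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # (queries, keys) similarity
    # Numerically stable softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                        # weighted sum of values

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))  # 4 queries, d_k = 8
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): one output vector per query
```

In the full model this runs once per head, with learned projections producing Q, K, and V from the token embeddings.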
Multi-class Image Classification with Vision Transformers (ViT)
from "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale": https://arxiv.org/pdf/2010.11929v1.pdf
Models trained with this implementation:
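The step that distinguishes ViT from a text Transformer is splitting the image into fixed-size patches that are then treated as tokens. A small NumPy sketch of that patching step (an illustration under the paper's 16x16 setting, not this package's code):

```python
import numpy as np

def image_to_patches(img, patch):
    """Split an (H, W, C) image into flattened non-overlapping
    patch x patch blocks, as in "An Image is Worth 16x16 Words"."""
    H, W, C = img.shape
    assert H % patch == 0 and W % patch == 0
    rows, cols = H // patch, W // patch
    # Reshape into (row-block, pixel-in-row, col-block, pixel-in-col, C),
    # then group the two block axes together and flatten each patch.
    return (img.reshape(rows, patch, cols, patch, C)
               .transpose(0, 2, 1, 3, 4)
               .reshape(rows * cols, patch * patch * C))

img = np.arange(32 * 32 * 3, dtype=float).reshape(32, 32, 3)
patches = image_to_patches(img, 16)
print(patches.shape)  # (4, 768): 4 patches, each 16*16*3 = 768 values
```

Each flattened patch is then linearly projected to the model dimension and given a position embedding before entering the encoder.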
Multi-class Image Classification with Data-efficient image Transformers (DeiT)
from "Training data-efficient image transformers & distillation through attention": https://arxiv.org/pdf/2012.12877v1.pdf
Models trained with this implementation:
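DeiT's contribution is training a ViT-style student with a distillation token supervised by a teacher network. A NumPy sketch of the paper's hard-label distillation objective (an illustration of the loss, not this package's code):

```python
import numpy as np

def log_softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def hard_distillation_loss(student_cls_logits, student_dist_logits,
                           teacher_logits, labels):
    """DeiT hard-label distillation: the class token is trained on the
    true labels, the distillation token on the teacher's argmax labels,
    and the two cross-entropies are averaged."""
    n = len(labels)
    teacher_labels = teacher_logits.argmax(axis=-1)
    ce_true = -log_softmax(student_cls_logits)[np.arange(n), labels].mean()
    ce_teacher = -log_softmax(student_dist_logits)[np.arange(n),
                                                   teacher_labels].mean()
    return 0.5 * ce_true + 0.5 * ce_teacher

rng = np.random.default_rng(0)
student_cls = rng.standard_normal((4, 10))   # class-token logits
student_dist = rng.standard_normal((4, 10))  # distillation-token logits
teacher = rng.standard_normal((4, 10))       # frozen teacher logits
labels = np.array([3, 1, 4, 1])
loss = hard_distillation_loss(student_cls, student_dist, teacher, labels)
```

At inference, DeiT averages the predictions of the class and distillation tokens.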
Download files
Source Distribution
Built Distribution
Hashes for transformer_implementations-0.0.5.tar.gz

Algorithm | Hash digest
---|---
SHA256 | 1e9a1118eb687dc76e8ffa439ddf27c96609c6bf771311a59001a3dc04c05cc9
MD5 | 19ed1ea9e2be6efd0d61e6231ea9416a
BLAKE2b-256 | 76fc44d4f3706a6858dc2551f7ff6149fba5c8a40f4976ec3343e4df089db636
Hashes for transformer_implementations-0.0.5-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | f062ed7610e58dd82ce92fc818880b227af25f9bb87d2afa1ca5f479112d13dc
MD5 | a59ddefed7542386417f460d1b8fbdc4
BLAKE2b-256 | 29b247fb8ad0df608e4f9b73b117854ba00135a152057fe16f2452db3d3bd121