# Transformer Implementations

Transformer implementations and some examples using them.

Implemented:
- Vanilla Transformer
- ViT - Vision Transformer
- DeiT - Data-efficient Image Transformer
## Installation

```shell
$ pip install transformer-implementations
```
## Language Translation

From "Attention Is All You Need": https://arxiv.org/pdf/1706.03762.pdf

Models trained with this implementation:
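The package's own API is not documented here, so as an illustration only, here is a minimal NumPy sketch of the scaled dot-product attention at the heart of the Transformer from "Attention Is All You Need" (the function name and shapes are this sketch's own choices, not the library's interface):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.shape[-1]
    # Similarity of every query with every key, scaled by sqrt(d_k)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_k)   # (batch, seq_q, seq_k)
    # Numerically stable softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Weighted average of the values
    return weights @ v

q = k = v = np.random.default_rng(0).normal(size=(1, 4, 8))
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # (1, 4, 8)
```

The scaling by `sqrt(d_k)` keeps the dot products from growing with dimension, which would otherwise push the softmax into regions with vanishing gradients.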
## Multi-class Image Classification with Vision Transformers (ViT)

From "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale": https://arxiv.org/pdf/2010.11929v1.pdf

Models trained with this implementation:
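ViT's key preprocessing step is splitting an image into fixed-size patches that are flattened and treated as tokens. A minimal NumPy sketch of that step (function name and channel-last layout are assumptions of this sketch, not the package's API):

```python
import numpy as np

def image_to_patches(img, patch=16):
    """Split an (H, W, C) image into flattened non-overlapping patches."""
    h, w, c = img.shape
    assert h % patch == 0 and w % patch == 0, "image dims must divide by patch size"
    # (h/p, p, w/p, p, c) -> group the two patch-grid axes together
    patches = img.reshape(h // patch, patch, w // patch, patch, c)
    patches = patches.transpose(0, 2, 1, 3, 4)        # (h/p, w/p, p, p, c)
    # Each patch becomes one flat token of length p*p*c
    return patches.reshape(-1, patch * patch * c)

img = np.zeros((224, 224, 3))
print(image_to_patches(img).shape)  # (196, 768)
```

For a 224x224 RGB image with 16x16 patches this yields 14x14 = 196 tokens of dimension 16*16*3 = 768, matching the "16x16 words" framing of the paper's title; a learned linear projection and position embeddings would follow before the transformer encoder.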
## Multi-class Image Classification with Data-efficient image Transformers (DeiT)

From "Training data-efficient image transformers & distillation through attention": https://arxiv.org/pdf/2012.12877v1.pdf

Models trained with this implementation:
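DeiT adds a distillation token whose output is trained against a teacher network. As a sketch of the paper's hard-label distillation objective (an average of two cross-entropy terms: the class head against the true label, and the distillation head against the teacher's hard prediction), assuming single-sample logits as plain NumPy arrays:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def hard_distillation_loss(cls_logits, dist_logits, label, teacher_logits):
    """0.5 * CE(class head, true label) + 0.5 * CE(distillation head, teacher's argmax)."""
    teacher_label = int(np.argmax(teacher_logits))     # teacher's hard prediction
    ce_cls = -np.log(softmax(cls_logits)[label])       # class token vs. ground truth
    ce_dist = -np.log(softmax(dist_logits)[teacher_label])  # distillation token vs. teacher
    return 0.5 * (ce_cls + ce_dist)
```

Using the teacher's hard label (rather than its soft probabilities) is what the paper calls hard distillation; the student can learn from the teacher even when the teacher disagrees with the ground-truth label.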