# Transformer Implementations

A collection of transformer implementations, with examples of their use.
Implemented:

- Vanilla Transformer
- ViT - Vision Transformer
- DeiT - Data-efficient Image Transformer
## Installation

```bash
pip install transformer-implementations
```
## Language Translation with the Vanilla Transformer

From "Attention is All You Need": https://arxiv.org/pdf/1706.03762.pdf

Models trained with this implementation:
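The package's own API is not shown on this page, so as an illustration only, here is a minimal NumPy sketch of scaled dot-product attention, the core operation of the vanilla Transformer from "Attention is All You Need". The function name and shapes are illustrative, not taken from this library.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V  (Vaswani et al., 2017).

    Q, K, V: arrays of shape (..., seq_len, d_k).
    """
    d_k = Q.shape[-1]
    # Similarity of every query with every key, scaled by sqrt(d_k)
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)      # (..., seq_q, seq_k)
    # Numerically stable softmax over the key dimension
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # Attention-weighted combination of the values
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4, 8))   # (batch, seq, d_k)
K = rng.normal(size=(2, 4, 8))
V = rng.normal(size=(2, 4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 4, 8)
```

In the full model this is wrapped in multi-head attention: the inputs are projected into several lower-dimensional subspaces, attended over independently, and the results concatenated.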
## Multi-class Image Classification with Vision Transformers (ViT)

From "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale": https://arxiv.org/pdf/2010.11929v1.pdf

Models trained with this implementation:
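The key idea of ViT is to treat an image as a sequence of flattened 16x16 patches, which then feed a standard Transformer encoder. As a sketch (not this package's API; the learned linear projection, class token, and position embeddings are omitted), the patchification step looks like this:

```python
import numpy as np

def image_to_patches(img, patch=16):
    """Split a (C, H, W) image into non-overlapping, flattened patches,
    as in ViT's 'an image is worth 16x16 words'.

    Returns an array of shape (num_patches, patch * patch * C).
    """
    C, H, W = img.shape
    assert H % patch == 0 and W % patch == 0, "image must tile evenly into patches"
    # (C, H/p, p, W/p, p) -> (H/p, W/p, p, p, C) -> (H/p * W/p, p*p*C)
    x = img.reshape(C, H // patch, patch, W // patch, patch)
    x = x.transpose(1, 3, 2, 4, 0).reshape(-1, patch * patch * C)
    return x

img = np.arange(3 * 224 * 224, dtype=np.float32).reshape(3, 224, 224)
patches = image_to_patches(img)
print(patches.shape)  # (196, 768): 14*14 patches, each 16*16*3 values
```

Each 768-dimensional patch vector plays the role a word embedding plays in the text Transformer.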
## Multi-class Image Classification with Data-efficient Image Transformers (DeiT)

From "Training data-efficient image transformers & distillation through attention": https://arxiv.org/pdf/2012.12877v1.pdf

Models trained with this implementation:
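DeiT's contribution is training a ViT-style model data-efficiently by distilling from a teacher network through an extra distillation token. In the paper's hard-distillation variant, the class token is trained against the ground-truth label while the distillation token is trained against the teacher's argmax prediction. A minimal NumPy sketch of that objective (function and argument names are illustrative, not this package's API):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def hard_distillation_loss(student_cls, student_dist, teacher_logits, labels):
    """Hard-label distillation as in DeiT: average of two cross-entropies.

    student_cls:    logits from the student's class token,        (batch, classes)
    student_dist:   logits from the student's distillation token, (batch, classes)
    teacher_logits: teacher predictions,                          (batch, classes)
    labels:         ground-truth class indices,                   (batch,)
    """
    teacher_labels = teacher_logits.argmax(axis=-1)   # teacher's hard predictions
    n = len(labels)
    ce_true = -np.log(softmax(student_cls)[np.arange(n), labels]).mean()
    ce_teacher = -np.log(softmax(student_dist)[np.arange(n), teacher_labels]).mean()
    return 0.5 * ce_true + 0.5 * ce_teacher

rng = np.random.default_rng(0)
student_cls = rng.normal(size=(4, 10))
student_dist = rng.normal(size=(4, 10))
teacher_logits = rng.normal(size=(4, 10))
labels = np.array([0, 1, 2, 3])
loss = hard_distillation_loss(student_cls, student_dist, teacher_logits, labels)
print(float(loss))
```

At inference time, DeiT averages the class-token and distillation-token predictions.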
## Project details
### Hashes for transformer_implementations-0.0.4.tar.gz (source distribution)

| Algorithm | Hash digest |
|---|---|
| SHA256 | e1912fd9bfdcd6d0a45a76dba00102952c5dda899085804525be7d087f55ab68 |
| MD5 | 14872ea994b1385857ccc068c382556d |
| BLAKE2b-256 | 25c2a5080014213a071624971fd9ac9c9a54a1854c273a9b7e0a0c12b7f8bba4 |
### Hashes for transformer_implementations-0.0.4-py3-none-any.whl (built distribution)

| Algorithm | Hash digest |
|---|---|
| SHA256 | 97f7a9cbd9b0c8271850af646d6622f8a6fc2e20abc81e4d06276c908844fffb |
| MD5 | 98b9712d8941c45b24b4724f821ed45a |
| BLAKE2b-256 | 72472c9de2088e3bebade68c4978b93090542b040e8b0ea126f61196cc610bb9 |