Transformer Implementations
A collection of transformer implementations, with examples of how to use them.
Implemented:
- Vanilla Transformer
- ViT - Vision Transformer
- DeiT - Data-efficient Image Transformer
Installation
```shell
$ pip install transformer-implementations
```
Language Translation
from "Attention Is All You Need": https://arxiv.org/pdf/1706.03762.pdf
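This package's own API is not shown on this page, so as an illustration of what the vanilla Transformer computes, here is a minimal PyTorch sketch of scaled dot-product attention, the operation at the heart of the paper. The function name and tensor shapes are my own choices, not this package's API:

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, as in the paper
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5
    if mask is not None:
        # positions where mask == 0 receive zero attention weight
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)
    return weights @ v, weights

# toy example: batch of 1, sequence of 4 tokens, model dimension 8
q = k = v = torch.randn(1, 4, 8)
out, attn = scaled_dot_product_attention(q, k, v)
print(out.shape, attn.shape)  # torch.Size([1, 4, 8]) torch.Size([1, 4, 4])
```

Each row of `attn` is a probability distribution over the key positions, so every row sums to 1.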
Models trained with this implementation:
Multi-class Image Classification with Vision Transformers (ViT)
from "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale": https://arxiv.org/pdf/2010.11929v1.pdf
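The idea in the paper's title — treating an image as a sequence of 16x16 patch "words" — can be sketched as a patch-embedding layer in PyTorch. This is illustrative only; the class and parameter names below are my own, not taken from this package:

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Split an image into fixed-size patches, linearly embed each one,
    and prepend a learnable [class] token, as described in the ViT paper."""
    def __init__(self, img_size=224, patch_size=16, in_chans=3, dim=768):
        super().__init__()
        # a conv whose stride equals its kernel size is equivalent to cutting
        # non-overlapping patches and applying one shared linear projection
        self.proj = nn.Conv2d(in_chans, dim,
                              kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))

    def forward(self, x):
        x = self.proj(x)                  # (B, dim, H/16, W/16)
        x = x.flatten(2).transpose(1, 2)  # (B, num_patches, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        return torch.cat([cls, x], dim=1)  # (B, num_patches + 1, dim)

emb = PatchEmbedding()
tokens = emb(torch.randn(2, 3, 224, 224))
print(tokens.shape)  # torch.Size([2, 197, 768]) -- 14*14 patches + 1 class token
```

The resulting token sequence is what a standard Transformer encoder then consumes; classification reads off the class token's final state.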
Models trained with this implementation:
Multi-class Image Classification with Data-efficient image Transformers (DeiT)
from "Training data-efficient image transformers & distillation through attention": https://arxiv.org/pdf/2012.12877v1.pdf
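DeiT's key addition is a distillation token trained against a teacher network's predictions. As a sketch of the paper's hard-label distillation objective (the function name is my own and this is not this package's API; the equal 1/2 weighting follows the paper):

```python
import torch
import torch.nn.functional as F

def hard_distillation_loss(cls_logits, dist_logits, teacher_logits, labels):
    # the class-token head learns from the ground-truth labels ...
    loss_cls = F.cross_entropy(cls_logits, labels)
    # ... while the distillation-token head learns the teacher's hard
    # predictions (its argmax class)
    loss_dist = F.cross_entropy(dist_logits, teacher_logits.argmax(dim=-1))
    # the paper weights the two terms equally
    return 0.5 * loss_cls + 0.5 * loss_dist

# toy batch: 4 samples, 10 classes
cls_logits, dist_logits, teacher_logits = (torch.randn(4, 10) for _ in range(3))
labels = torch.randint(0, 10, (4,))
loss = hard_distillation_loss(cls_logits, dist_logits, teacher_logits, labels)
print(loss.item())  # a non-negative scalar
```

At inference time the paper averages the predictions of the class and distillation heads.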
Models trained with this implementation:
Download files
Source distribution: transformer_implementations-0.0.6.tar.gz
Built distribution: transformer_implementations-0.0.6-py3-none-any.whl
Hashes for transformer_implementations-0.0.6.tar.gz
| Algorithm | Hash digest |
|---|---|
| SHA256 | 16c2df20855cf50bb3077df792ffd5d4df0c654b67c473691c3b78d70a961134 |
| MD5 | 49681908629705c48774564b26b00c53 |
| BLAKE2b-256 | 00fd33e0b6a6fb771af555a9c76007e05d43022752878c6c3bb27c08b3edc81f |
Hashes for transformer_implementations-0.0.6-py3-none-any.whl
| Algorithm | Hash digest |
|---|---|
| SHA256 | a6e723671aee45707bb4ff36a9e9a2c6a0c9d7e2a1164c4454376b5530fd7c08 |
| MD5 | bac3df48c900f191f37a27331bc3dc1c |
| BLAKE2b-256 | 12019c9cf4ccac475d0dc5823d45388bf1272b2a881f24668a02d3e6d3bc3634 |