
PyTorch Implementation of Self-attention Does Not Need $O(n^2)$ Memory

Project description

faster-transformer

A PyTorch implementation of Self-attention Does Not Need $O(n^2)$ Memory (Rabe and Staats, 2021):

@misc{rabe2021selfattention,
    title={Self-attention Does Not Need $O(n^2)$ Memory}, 
    author={Markus N. Rabe and Charles Staats},
    year={2021},
    eprint={2112.05682},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
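
The core idea of the paper is to process the keys and values in fixed-size chunks while carrying a numerically stable running softmax (a running maximum, a running sum of exponentials, and a running weighted sum of values), so the full query-by-key attention matrix is never materialized and the extra memory depends on the chunk sizes rather than on the sequence length. Below is a minimal sketch of that idea in plain PyTorch; it is illustrative only, not this package's API, and the function name, chunk sizes, and tensor shapes are assumptions.

import torch

def memory_efficient_attention(q, k, v, q_chunk=1024, kv_chunk=1024):
    # q, k, v: (batch, seq_len, dim). Returns (batch, q_len, dim).
    # Keys/values are scanned in chunks while a numerically stable running
    # softmax is maintained, so no (q_len x kv_len) score matrix is built.
    scale = q.shape[-1] ** -0.5
    outputs = []
    for qs in range(0, q.shape[1], q_chunk):
        qc = q[:, qs:qs + q_chunk] * scale                        # (b, qc, d)
        running_max = torch.full(qc.shape[:-1], float("-inf"),
                                 dtype=q.dtype, device=q.device)  # (b, qc)
        exp_sum = torch.zeros_like(running_max)                   # (b, qc)
        acc = torch.zeros_like(qc)                                # (b, qc, d)
        for ks in range(0, k.shape[1], kv_chunk):
            kc = k[:, ks:ks + kv_chunk]                           # (b, kc, d)
            vc = v[:, ks:ks + kv_chunk]                           # (b, kc, d)
            scores = torch.einsum("bqd,bkd->bqk", qc, kc)         # (b, qc, kc)
            new_max = torch.maximum(running_max, scores.amax(dim=-1))
            # Rescale previously accumulated statistics to the new maximum.
            correction = torch.exp(running_max - new_max)
            exp_scores = torch.exp(scores - new_max.unsqueeze(-1))
            exp_sum = exp_sum * correction + exp_scores.sum(dim=-1)
            acc = (acc * correction.unsqueeze(-1)
                   + torch.einsum("bqk,bkd->bqd", exp_scores, vc))
            running_max = new_max
        outputs.append(acc / exp_sum.unsqueeze(-1))
    return torch.cat(outputs, dim=1)

# Example (shapes are illustrative):
# out = memory_efficient_attention(torch.randn(2, 4096, 64),
#                                  torch.randn(2, 4096, 64),
#                                  torch.randn(2, 4096, 64))

For unmasked single-head attention this matches a reference $\mathrm{softmax}(QK^\top/\sqrt{d})\,V$ up to floating-point error, while peak memory scales with the chunk sizes instead of the full sequence length.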

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
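
If the goal is simply to use the package rather than inspect the files, it can presumably be installed from PyPI with pip (assuming the project name matches the distribution filenames listed below):

pip install memory-efficient-transformer-pytorch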

Source Distribution

Built Distribution

File details

Details for the file memory-efficient-transformer-pytorch-0.2.0.tar.gz.

File hashes

Hashes for memory-efficient-transformer-pytorch-0.2.0.tar.gz
Algorithm Hash digest
SHA256 a3c4923295a5679decd9c3f338b606b7cdeb91a8beb286d4e29e19c2a99690f3
MD5 a3d1a7ee68a5396eb3a465c5423a40d4
BLAKE2b-256 07a6382c4db04af8409dbdc2df4b120c3d43b5f619cc69dccbade1cd4f3dad0e

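Purely as an illustration, one way to check a downloaded file against the hashes listed on this page is Python's standard hashlib (the local filename is an assumption; the same pattern applies to the wheel below):

import hashlib

# Hypothetical path to the downloaded sdist; adjust as needed.
path = "memory-efficient-transformer-pytorch-0.2.0.tar.gz"
expected_sha256 = "a3c4923295a5679decd9c3f338b606b7cdeb91a8beb286d4e29e19c2a99690f3"

with open(path, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == expected_sha256 else "hash mismatch")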

File details

Details for the file memory_efficient_transformer_pytorch-0.2.0-py3-none-any.whl.

File hashes

Hashes for memory_efficient_transformer_pytorch-0.2.0-py3-none-any.whl
Algorithm Hash digest
SHA256 6569f3b6d73cc59e554ccfdc98cce32a12fa3517ae9122eede30f9adb62231ee
MD5 9422213987376dc288142cfe42187947
BLAKE2b-256 32daa2ce9b4f4793ebe59ad86e867446c65d1d502a7eb17a47e365f58a22b268

