
Attention Residuals (AttnRes) kernels

Project description

Flash Attention Residuals

1.4x faster inference/training than an optimized torch.compile implementation of the paper's two-phase batched attention with online softmax

20% reduction in training memory (without activation checkpointing)*

*Benchmarked on H100. Dependent on problem size and setup.

Credits:

Thanks to Mohamed Osman (https://github.com/spaghettiSystems) and Cartesia for advising on and supporting the development of this kernel.

Roadmap:

  • Proper backward eval
  • Implement in CuTe and CUDA
  • Tune precision
  • Mixed FP16/BF16 storage with a stored quantization scale
  • Stochastic rounding (see the sketch after this list)
  • Make into Python package
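
For reference, here is a minimal PyTorch sketch of stochastic rounding to bf16; the function name and bit trick are illustrative, not this package's planned implementation. Adding uniform noise over the 16 bits that bf16 discards and then truncating rounds up with probability proportional to the discarded fraction, so the rounding error is unbiased in expectation.

    import torch

    def stochastic_round_to_bf16(x: torch.Tensor) -> torch.Tensor:
        # Hypothetical illustration: reinterpret fp32 as int32, add uniform noise
        # over the 16 low bits that bf16 drops, then truncate. Carries into the
        # kept bits implement the probabilistic round-up.
        assert x.dtype == torch.float32
        bits = x.contiguous().view(torch.int32)
        noise = torch.randint(0, 1 << 16, bits.shape, dtype=torch.int32, device=x.device)
        truncated = (bits + noise) & -65536  # keep sign, exponent, and the 7 mantissa bits bf16 keeps
        return truncated.view(torch.float32).to(torch.bfloat16)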

Insights:

  • Normalizing in phase 1 keeps outputs bounded (each output is a convex combination of the values), so bf16 error doesn't grow with softmax flatness. Phase 2 computes in fp32, and its reduction algebra matches split-KV Flash Attention (see the merge sketch after this list).
  • Certain dimensions, especially NUM_QUERIES_PER_BLOCK, are small, so a semi-elementwise (B, T) kernel that unrolls them with tl.static_range beats a tl.dot formulation (see the Triton sketch below).
  • The kernel is memory-bound, and the semi-elementwise formulation allows for kernel fusion.
  • NUM_SOURCE_BLOCKS and NUM_QUERIES_PER_BLOCK should be autotuning keys, which torch.compile doesn't allow; this enables faster kernels.
  • Because NUM_QUERIES_PER_BLOCK is small, loads should use eviction_policy="evict_last".
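
To make the first point concrete, here is a minimal PyTorch sketch of the split-KV-style merge that the phase-2 reduction follows. The function name, tensor names, and shapes are illustrative, not this package's API: each split carries a normalized partial output, its running max logit, and its softmax denominator, and the merge rescales and sums them in fp32.

    import torch

    def merge_splits(o, m, l):
        # Illustrative shapes: o is (S, T, D), m and l are (S, T), where S is the
        # number of KV (source) splits, T the query positions, D the head dim.
        # Each o[i] is already softmax-normalized within its split, so every entry
        # is a convex combination of values and stays bounded; the merge runs in fp32.
        o, m, l = o.float(), m.float(), l.float()
        m_all = m.max(dim=0).values                      # (T,) global max logit
        scale = l * torch.exp(m - m_all)                 # (S, T) rescaled denominators
        l_all = scale.sum(dim=0)                         # (T,) global denominator
        out = (o * scale.unsqueeze(-1)).sum(dim=0) / l_all.unsqueeze(-1)
        return out, m_all, l_all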

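The remaining points are about kernel shape. Below is a hedged Triton sketch of what they imply; the kernel name, pointer layout, and reduction body are placeholders, and the point is only the autotune keys, the tl.static_range unrolling over the small per-block dimensions, and the evict_last load hint.

    import triton
    import triton.language as tl

    @triton.autotune(
        configs=[
            triton.Config({"BLOCK_T": 64}, num_warps=4),
            triton.Config({"BLOCK_T": 128}, num_warps=8),
        ],
        # Re-tune whenever these problem dimensions change.
        key=["NUM_SOURCE_BLOCKS", "NUM_QUERIES_PER_BLOCK"],
    )
    @triton.jit
    def attn_res_reduce_kernel(  # placeholder name and memory layout
        res_ptr, out_ptr, T,
        NUM_SOURCE_BLOCKS: tl.constexpr,
        NUM_QUERIES_PER_BLOCK: tl.constexpr,
        BLOCK_T: tl.constexpr,
    ):
        pid = tl.program_id(0)
        offs = pid * BLOCK_T + tl.arange(0, BLOCK_T)
        mask = offs < T
        acc = tl.zeros((BLOCK_T,), dtype=tl.float32)
        # The per-block dimensions are tiny, so unroll them with tl.static_range
        # instead of building tl.dot tiles; the kernel stays semi-elementwise over T.
        for s in tl.static_range(NUM_SOURCE_BLOCKS):
            for q in tl.static_range(NUM_QUERIES_PER_BLOCK):
                x = tl.load(
                    res_ptr + (s * NUM_QUERIES_PER_BLOCK + q) * T + offs,
                    mask=mask, other=0.0,
                    eviction_policy="evict_last",  # small, reused tiles: keep them resident in cache
                )
                acc += x.to(tl.float32)
        tl.store(out_ptr + offs, acc, mask=mask)

    # Launch (the grid depends on the tuned BLOCK_T):
    # grid = lambda meta: (triton.cdiv(T, meta["BLOCK_T"]),)
    # attn_res_reduce_kernel[grid](res, out, T, NUM_SOURCE_BLOCKS=8, NUM_QUERIES_PER_BLOCK=4)
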
Project details


Download files

Download the file for your platform.

Source Distribution

flash_attn_res-0.1.1.tar.gz (1.1 MB)

Uploaded Source

Built Distribution


flash_attn_res-0.1.1-py2.py3-none-any.whl (15.6 kB)

Uploaded Python 2, Python 3

File details

Details for the file flash_attn_res-0.1.1.tar.gz.

File metadata

  • Download URL: flash_attn_res-0.1.1.tar.gz
  • Upload date:
  • Size: 1.1 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.11

File hashes

Hashes for flash_attn_res-0.1.1.tar.gz
Algorithm Hash digest
SHA256 ae7fd8f5b888fdbf0c53a8e5147ca4440d113b7b75360487cd5fd1f4c43f086f
MD5 ea4cd9a66ee6c5b70aba2c103f10ce9a
BLAKE2b-256 0c411de915e74670968cfd6e838f94cf1598ab238c34f5b730a3f093a4d555f3


File details

Details for the file flash_attn_res-0.1.1-py2.py3-none-any.whl.

File metadata

File hashes

Hashes for flash_attn_res-0.1.1-py2.py3-none-any.whl
Algorithm Hash digest
SHA256 c5f84076bed76f95a3bc04db35e271c21d52c7072edaea3714237b0de5267937
MD5 b202ce3f10ed4678ce1a44895edbd635
BLAKE2b-256 ee32b7ca18c4bb429d677739bc454bd054e212db029ee563a799c0cfe3ec2a36

