Attention Residuals (AttnRes) kernels

Flash Attention Residuals

1.4x faster inference/training than an optimized torch.compile implementation of the paper's two-phase batched attention with online softmax

20% reduction in training memory (without activation checkpointing)*

*Benchmarked on an H100; results depend on problem size and setup.

Credits:

Thanks to Mohamed Osman (https://github.com/spaghettiSystems) and Cartesia for advising on and supporting the development of this kernel.

Install

pip install flash-attn-res

Roadmap:

  • Proper backward-pass evaluation
  • Implement in CuTe and CUDA
  • Tune precision
  • Mixed FP16/BF16 with stored quantization scale
  • Stochastic rounding
  • Make into a Python package

Insights:

  • Normalizing in phase 1 keeps the partial outputs bounded (each is a convex combination of value rows), so bf16 storage error does not grow as the softmax gets flatter. Phase 2 computes in fp32, and its reduction algebra matches split-KV Flash Attention (see the first sketch after this list).
  • Certain dimensions, especially NUM_QUERIES_PER_BLOCK, are small, so a semi-elementwise (B, T) kernel that unrolls them with tl.static_range beats padding tiles up to what tl.dot requires (second sketch below).
  • The kernel is memory-bound, and the semi-elementwise formulation makes it straightforward to fuse with surrounding elementwise work.
  • NUM_SOURCE_BLOCKS and NUM_QUERIES_PER_BLOCK should be autotuning keys, something torch.compile does not expose; this allows for faster kernels (third sketch below).
  • Because NUM_QUERIES_PER_BLOCK is small, the same cache lines are reused across iterations, so loads should use eviction_policy="evict_last".
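
The phase-1 normalization and phase-2 merge from the first bullet can be checked in a few lines of plain PyTorch. This is an illustrative sketch of the split-KV reduction algebra, not the shipped kernel, and the function and variable names are assumptions. Each phase-1 split k emits a normalized partial output o_k (a convex combination of value rows, so it stays bounded no matter how flat the softmax is), its running max m_k, and its softmax denominator l_k; phase 2 merges them in fp32:

    import torch

    def merge_splits(o, m, l):
        """o: (S, D) normalized per-split outputs, m: (S,) running maxes,
        l: (S,) per-split softmax denominators. Returns the full-softmax
        output via the split-KV Flash Attention merge."""
        o, m, l = o.float(), m.float(), l.float()  # phase 2 runs in fp32
        m_all = m.max()                            # global max over splits
        scale = l * torch.exp(m - m_all)           # rescaled denominators
        # weighted combination of the bounded per-split outputs
        return (scale[:, None] * o).sum(dim=0) / scale.sum()

Because each o_k is already normalized, storing it in bf16 between the two phases only costs bounded rounding error, which is the point of that bullet.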
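
The "semi-elementwise" style from the second bullet looks roughly like the following Triton sketch (names and memory layout are illustrative, not the actual kernel). One program handles one (B, T) position, and the tiny NUM_QUERIES_PER_BLOCK axis is fully unrolled with tl.static_range instead of being padded up to the 16-wide tiles that tl.dot requires:

    import triton
    import triton.language as tl

    @triton.jit
    def weighted_accum_kernel(
        w_ptr, v_ptr, out_ptr,
        D: tl.constexpr, NUM_QUERIES_PER_BLOCK: tl.constexpr,
    ):
        pid = tl.program_id(0)          # one program per (B, T) position
        d = tl.arange(0, D)
        acc = tl.zeros((D,), dtype=tl.float32)
        # NUM_QUERIES_PER_BLOCK is tiny, so a fully unrolled elementwise
        # multiply-accumulate wins over a padded tl.dot
        for q in tl.static_range(NUM_QUERIES_PER_BLOCK):
            w = tl.load(w_ptr + pid * NUM_QUERIES_PER_BLOCK + q)
            v = tl.load(v_ptr + (pid * NUM_QUERIES_PER_BLOCK + q) * D + d)
            acc += w * v.to(tl.float32)
        tl.store(out_ptr + pid * D + d, acc)

Staying elementwise also keeps the kernel easy to fuse with neighboring ops, which matters because it is memory-bound (third bullet).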
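
Finally, a sketch of the autotuning and cache-eviction points from the last two bullets; the configs and the kernel body are placeholders, not the real setup. Making NUM_SOURCE_BLOCKS and NUM_QUERIES_PER_BLOCK autotuning keys means Triton re-tunes per problem shape, which torch.compile does not do, and eviction_policy="evict_last" keeps the heavily reused lines resident in cache:

    import triton
    import triton.language as tl

    @triton.autotune(
        configs=[
            triton.Config({"BLOCK_T": 64}, num_warps=2),
            triton.Config({"BLOCK_T": 128}, num_warps=4),
        ],
        # re-tune whenever these problem-shape parameters change
        key=["NUM_SOURCE_BLOCKS", "NUM_QUERIES_PER_BLOCK"],
    )
    @triton.jit
    def fused_kernel(x_ptr, out_ptr, T,
                     NUM_SOURCE_BLOCKS, NUM_QUERIES_PER_BLOCK,
                     BLOCK_T: tl.constexpr):
        offs = tl.program_id(0) * BLOCK_T + tl.arange(0, BLOCK_T)
        mask = offs < T
        # small NUM_QUERIES_PER_BLOCK means the same lines get re-read,
        # so ask the cache to evict them last
        x = tl.load(x_ptr + offs, mask=mask, eviction_policy="evict_last")
        tl.store(out_ptr + offs, x * 2.0, mask=mask)  # placeholder body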
