
Attention Residuals (AttnRes) kernels

Project description

Flash Attention Residuals

4x faster inference/training than a naive attention-residuals implementation compiled with torch.compile

20% reduction in training memory (without activation checkpointing)*

*Benchmarked on H100. Dependent on problem size and setup.

Reference: https://arxiv.org/abs/2603.15031 (Kimi Team, MoonshotAI, 2026)

Credits:

Thanks to Mohamed Osman (https://github.com/spaghettiSystems) and Cartesia (https://github.com/cartesia-ai) for advising on and supporting the development of this project.

Install

pip install flash-attn-res

Usage

This package contains Triton kernels, triton_op wrappers compatible with torch.compile, and an experimental high-performance Block AttnRes autograd implementation. See the src and benchmarks folders.
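As a rough sketch of the intended calling pattern (the module path and function name below are hypothetical placeholders, not the package's documented API), a triton_op-wrapped kernel behaves like a regular PyTorch op and composes with torch.compile:

import torch
import flash_attn_res  # hypothetical import; see the src folder for the actual entry points

def attention_with_residuals(q, k, v, residual):
    # flash_attn_res.attn_res is a placeholder name for a triton_op-wrapped kernel.
    return flash_attn_res.attn_res(q, k, v, residual)

compiled = torch.compile(attention_with_residuals)  # triton_op wrappers trace through torch.compile

q = torch.randn(1, 8, 1024, 64, device="cuda", dtype=torch.bfloat16)
k, v, residual = torch.randn_like(q), torch.randn_like(q), torch.randn_like(q)
out = compiled(q, k, v, residual)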

Roadmap:

  • Better autotuning setup
  • Better benchmarks
  • More robust autograd impl.
  • Precision tuning
  • Mixed FP16/BF16 precision with a stored quantization scale
  • Stochastic rounding
  • Implementations in CuTe, CUDA, and other DSLs

Development Notes:

  • Normalizing in phase 1 keeps outputs bounded (each block output is a convex combination of values), so bf16 error doesn't scale with softmax flatness. Phase 2 computes in fp32, and the reduction algebra matches split-KV Flash Attention (see the merge sketch after this list).
  • Certain dimensions, especially NUM_QUERIES_PER_BLOCK, are small, so a semi-elementwise (B, T) kernel using tl.static_range beats tl.dot.
  • The kernel is memory-bound, and the semi-elementwise formulation allows for kernel fusion.
  • NUM_SOURCE_BLOCKS and NUM_QUERIES_PER_BLOCK should be autotuning keys (which torch.compile does not allow), enabling faster kernels (see the kernel skeleton after this list).
  • Because NUM_QUERIES_PER_BLOCK is small, eviction_policy should be "evict_last".
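To make the phase-1/phase-2 note above concrete, here is a small numerical sketch in plain PyTorch (a single query, not the kernel itself, and independent of this package's API): each KV block produces a normalized, bounded output plus its max and sum-of-exp statistics, and an fp32 merge with the split-KV Flash Attention algebra recovers exact full-softmax attention.

import torch

torch.manual_seed(0)
T, D, BLOCK = 256, 64, 64
q, k, v = torch.randn(D), torch.randn(T, D), torch.randn(T, D)

# Reference: full softmax attention for one query.
ref = torch.softmax(q @ k.T / D**0.5, dim=-1) @ v

# Phase 1: per-block normalized outputs (convex combinations of V rows, hence bounded)
# plus per-block max and sum-of-exp statistics.
outs, maxes, sums = [], [], []
for start in range(0, T, BLOCK):
    s = (q @ k[start:start + BLOCK].T) / D**0.5
    m = s.max()
    p = torch.exp(s - m)
    l = p.sum()
    outs.append((p / l) @ v[start:start + BLOCK])  # normalized, so safe to keep in low precision
    maxes.append(m)
    sums.append(l)

# Phase 2 (fp32): merge the blocks with the same reduction algebra as split-KV Flash Attention.
m_all = torch.stack(maxes)
w = torch.exp(m_all - m_all.max()) * torch.stack(sums)  # per-block un-normalization weights
merged = (w[:, None] * torch.stack(outs)).sum(0) / w.sum()

assert torch.allclose(merged, ref, atol=1e-5)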
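The remaining notes are Triton-level knobs. The skeleton below is purely illustrative (a plain block-sum reduction with made-up shapes and names, not the AttnRes kernel); it only demonstrates the three mechanisms mentioned: autotuning keyed on NUM_SOURCE_BLOCKS / NUM_QUERIES_PER_BLOCK, a tl.static_range loop over the small query dimension instead of tl.dot, and evict_last loads.

import torch
import triton
import triton.language as tl

@triton.autotune(
    configs=[triton.Config({}, num_warps=4), triton.Config({}, num_warps=8)],
    key=["NUM_SOURCE_BLOCKS", "NUM_QUERIES_PER_BLOCK"],  # re-tune when these change
)
@triton.jit
def _sum_blocks(
    src_ptr, out_ptr,
    NUM_SOURCE_BLOCKS: tl.constexpr,
    NUM_QUERIES_PER_BLOCK: tl.constexpr,
    HEAD_DIM: tl.constexpr,
):
    # One program per (batch, time) position: semi-elementwise over (B, T).
    pid = tl.program_id(0)
    d = tl.arange(0, HEAD_DIM)
    acc = tl.zeros((HEAD_DIM,), dtype=tl.float32)
    for blk in range(NUM_SOURCE_BLOCKS):
        # NUM_QUERIES_PER_BLOCK is small, so unroll with tl.static_range instead of using tl.dot.
        for qi in tl.static_range(NUM_QUERIES_PER_BLOCK):
            offs = ((pid * NUM_SOURCE_BLOCKS + blk) * NUM_QUERIES_PER_BLOCK + qi) * HEAD_DIM + d
            # Keep reused lines resident in cache: eviction_policy="evict_last".
            acc += tl.load(src_ptr + offs, eviction_policy="evict_last").to(tl.float32)
    tl.store(out_ptr + pid * HEAD_DIM + d, acc)

# Illustrative launch: accumulate over (source block, query) for each of B*T positions.
B_T, SRC, QPB, DIM = 128, 4, 2, 64
src = torch.randn(B_T, SRC, QPB, DIM, device="cuda", dtype=torch.bfloat16)
out = torch.empty(B_T, DIM, device="cuda", dtype=torch.float32)
_sum_blocks[(B_T,)](src, out, NUM_SOURCE_BLOCKS=SRC, NUM_QUERIES_PER_BLOCK=QPB, HEAD_DIM=DIM)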

Download files

Download the file for your platform.

Source Distribution

flash_attn_res-0.1.12.tar.gz (1.3 MB)

Uploaded: Source

Built Distribution


flash_attn_res-0.1.12-py2.py3-none-any.whl (18.3 kB)

Uploaded: Python 2, Python 3

File details

Details for the file flash_attn_res-0.1.12.tar.gz.

File metadata

  • Download URL: flash_attn_res-0.1.12.tar.gz
  • Size: 1.3 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.11

File hashes

Hashes for flash_attn_res-0.1.12.tar.gz

  • SHA256: 0f2b0bd6030a4d4af246229519dbaecdb1a2988c5113717b305baead904de9bf
  • MD5: c2055e52b3c1dda3512a0bfad57dea31
  • BLAKE2b-256: 4c13f98fe7f640ff73a3cb642cf158fba6d6d918a5e1314d1fb0f6e90e6d817a


File details

Details for the file flash_attn_res-0.1.12-py2.py3-none-any.whl.

File hashes

Hashes for flash_attn_res-0.1.12-py2.py3-none-any.whl

  • SHA256: 9e88c9dd69b74e4ac3d5958311b52b875ab6042a0e606eb9897090d7f988d714
  • MD5: 2823714bd9e8599e6a825d267a4c6353
  • BLAKE2b-256: 5f7c5547a0235b9e7c83ed46f94ed15a50acd276a05ab9938d494992ec0e2fbb

