
Attention Residuals (AttnRes) kernels

Project description

Flash Attention Residuals

4x faster inference/training than a naive attention-residuals implementation under torch.compile

20% reduction in training memory (without activation checkpointing)*

*Benchmarked on H100. Dependent on problem size and setup.

Reference: https://arxiv.org/abs/2603.15031 (Kimi Team, MoonshotAI, 2026)

Credits:

Thanks to Mohamed Osman (https://github.com/spaghettiSystems) and Cartesia (https://github.com/cartesia-ai) for advising on and supporting the development of this project.

Install

pip install flash-attn-res

Usage

This package contains Triton kernels, triton_op wrappers compatible with torch.compile, and an experimental high-performance Block AttnRes autograd implementation. See the src and benchmarks folders.
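
The wrapper pattern looks roughly like the sketch below, built on torch.library.triton_op and wrap_triton (PyTorch 2.6+). The toy elementwise kernel and the toylib::add name are purely illustrative stand-ins; the real AttnRes kernels and their signatures live in src.

import torch
import triton
import triton.language as tl
from torch.library import triton_op, wrap_triton

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Toy elementwise kernel standing in for the real AttnRes kernels.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

@triton_op("toylib::add", mutates_args={})
def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # wrap_triton makes the kernel launch traceable, so the custom op
    # composes with torch.compile instead of forcing a graph break.
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)
    wrap_triton(add_kernel)[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

if __name__ == "__main__":
    # Tiny smoke test (Triton requires a CUDA device).
    a = torch.randn(4096, device="cuda")
    b = torch.randn(4096, device="cuda")
    y = torch.compile(lambda u, v: add(u, v))(a, b)
    assert torch.allclose(y, a + b)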

Roadmap:

  • Better autotuning defaults
  • Better benchmarks
  • More robust autograd impl.
  • Precision tuning
  • Mixed FP16/BF16 with stored quantization scales
  • Stochastic rounding
  • Implementations in CuTe, CUDA, and other DSLs

Development Notes:

  • Normalizing in phase 1 keeps the outputs bounded (each partial is a convex combination of values), so bf16 error doesn't scale with softmax flatness. Phase 2 computes in fp32, and the reduction algebra matches split-KV Flash Attention; see the merge sketch after this list.
  • Certain dimensions, especially NUM_QUERIES_PER_BLOCK, are small, so a semi-elementwise (B, T) kernel that unrolls with tl.static_range beats tl.dot.
  • The kernel is memory-bound, and the semi-elementwise formulation makes kernel fusion possible.
  • NUM_SOURCE_BLOCKS and NUM_QUERIES_PER_BLOCK should be autotuning keys, which torch.compile does not allow; tuning on them yields faster kernels.
  • NUM_QUERIES_PER_BLOCK is small, so loads should use eviction_policy="evict_last". A toy kernel illustrating the last four notes follows the list.
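
For the first note, here is a minimal fp32 merge sketch, assuming each phase-1 block emits a normalized output o_i, a running logit max m_i, and a softmax denominator l_i (names and shapes are illustrative, not the shipped code):

import torch

def merge_partials(o1, m1, l1, o2, m2, l2):
    # Merge two normalized split-KV partials in fp32, as phase 2 does.
    # o_i: normalized block output (..., d); m_i: running logit max (...);
    # l_i: softmax denominator (...). Each o_i is a convex combination of
    # values, so it stays bounded no matter how flat the softmax is.
    m = torch.maximum(m1, m2)
    w1 = l1 * torch.exp(m1 - m)
    w2 = l2 * torch.exp(m2 - m)
    l = w1 + w2
    o = (w1.unsqueeze(-1) * o1.float()
         + w2.unsqueeze(-1) * o2.float()) / l.unsqueeze(-1)
    return o, m, l  # the merged output is again a convex combination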
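
For the remaining notes, a toy Triton fragment in the same style: small compile-time trip counts unrolled with tl.static_range instead of tl.dot, the block-shape constants used as autotune keys, and evict_last load hints. The constant names mirror the notes; everything else is illustrative and assumes the column count divides BLOCK:

import triton
import triton.language as tl

@triton.autotune(
    configs=[triton.Config({"BLOCK": b}, num_warps=w)
             for b in (128, 256) for w in (2, 4)],
    # Re-tune whenever the block-shape constants change, per the notes.
    key=["NUM_SOURCE_BLOCKS", "NUM_QUERIES_PER_BLOCK"],
)
@triton.jit
def toy_semi_elementwise_kernel(
    w_ptr, v_ptr, out_ptr, row_stride,
    NUM_SOURCE_BLOCKS: tl.constexpr,
    NUM_QUERIES_PER_BLOCK: tl.constexpr,
    BLOCK: tl.constexpr,
):
    # One program handles BLOCK contiguous columns; the sketch is
    # mask-free for brevity.
    offs = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
    acc = tl.zeros((BLOCK,), dtype=tl.float32)
    # Both trip counts are small compile-time constants, so unroll with
    # tl.static_range and accumulate elementwise instead of using tl.dot.
    for s in tl.static_range(NUM_SOURCE_BLOCKS):
        for q in tl.static_range(NUM_QUERIES_PER_BLOCK):
            row = (s * NUM_QUERIES_PER_BLOCK + q) * row_stride
            # Small, reused working set: hint the cache to keep it resident.
            w = tl.load(w_ptr + row + offs, eviction_policy="evict_last")
            v = tl.load(v_ptr + row + offs, eviction_policy="evict_last")
            acc += w.to(tl.float32) * v.to(tl.float32)
    tl.store(out_ptr + offs, acc)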

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

flash_attn_res-0.1.11.tar.gz (1.3 MB)

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

flash_attn_res-0.1.11-py2.py3-none-any.whl (17.7 kB)

File details

Details for the file flash_attn_res-0.1.11.tar.gz.

File metadata

  • Download URL: flash_attn_res-0.1.11.tar.gz
  • Upload date:
  • Size: 1.3 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.11

File hashes

Hashes for flash_attn_res-0.1.11.tar.gz:

  • SHA256: 1e15546883d6681c92188f87c330ebbdc23d7f6316f66d3e5a21eb1deb8a9e0b
  • MD5: 6e038afbff61b5db2f8b70c53724e41a
  • BLAKE2b-256: 7c2f90f9e16525fb511343f4120e0c55954eb7fda68fbb2368f327ccc451a992

File details

Details for the file flash_attn_res-0.1.11-py2.py3-none-any.whl.

File metadata

  • Download URL: flash_attn_res-0.1.11-py2.py3-none-any.whl
  • Size: 17.7 kB
  • Tags: Python 2, Python 3

File hashes

Hashes for flash_attn_res-0.1.11-py2.py3-none-any.whl:

  • SHA256: b6da604284bf2c4ed15674da59d6e3e08137bf9129fc42daebf51ac1d8a85318
  • MD5: 6f50fbe87521dc02d0283aeac4c0cfef
  • BLAKE2b-256: 3e4b7e020b5af80f6be751e5b6eb3fd929addee1d3c5dfa26342918620ed08fc
