
Attention Residuals (AttnRes) kernels

Project description

Flash Attention Residuals

4x faster inference/training than a naive attention-residuals implementation under torch.compile

20% reduction in training memory (without activation checkpointing)*

*Benchmarked on an H100; results depend on problem size and setup.

Reference: https://arxiv.org/abs/2603.15031 (Kimi Team, MoonshotAI, 2026)

Credits:

Thanks to Mohamed Osman (https://github.com/spaghettiSystems) and Cartesia (https://github.com/cartesia-ai) for advising on and supporting the development of this project.

Install

pip install flash-attn-res

Usage

This package contains Triton kernels, triton_op wrappers compatible with torch.compile, and an experimental high-performance Block AttnRes autograd implementation. See the src and benchmarks folders.
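A minimal usage sketch follows. The entry-point name and signature here are assumptions for illustration only (the actual API lives in the src folder); the one thing stated by the package is that the triton_op wrappers work under torch.compile.

    import torch
    from flash_attn_res import attn_res  # hypothetical entry point; check src/ for the real name

    # Illustrative shapes: (batch, heads, seq_len, head_dim)
    q = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.bfloat16)
    k = torch.randn_like(q)
    v = torch.randn_like(q)

    # The triton_op wrappers are torch.compile-compatible, so the kernel can
    # be called from inside a compiled function without graph breaks.
    compiled = torch.compile(attn_res)
    out = compiled(q, k, v)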

Roadmap:

  • Better autotuning defaults
  • Better benchmarks
  • More robust autograd impl.
  • Precision tuning
  • Mixed FP16/BF16 precision with stored quantization scales
  • Stochastic rounding
  • Implementations in CuTe, CUDA, and other DSLs

Development Notes:

  • Normalizing in phase 1 keeps outputs bounded (a convex combination of values), so bf16 error doesn't scale with softmax flatness. Phase 2 computes in fp32, and the reduction algebra matches split-KV Flash Attention (see the sketch after this list).
  • Certain dimensions, especially NUM_QUERIES_PER_BLOCK, are small, so a semi-elementwise (B, T) kernel with tl.static_range beats tl.dot.
  • The kernel is memory-bound, and the semi-elementwise formulation allows for kernel fusion.
  • Unlike with torch.compile, NUM_SOURCE_BLOCKS and NUM_QUERIES_PER_BLOCK can serve as autotuning keys, which yields faster kernels.
  • Because NUM_QUERIES_PER_BLOCK is small, loads should use eviction_policy="evict_last".
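
The split-KV reduction mentioned in the first note is the standard Flash Attention merge algebra. A minimal fp32 sketch of that recombination, for reference (this is not this package's kernel code, and all names are illustrative):

    import torch

    def merge_partial_attn(o1, m1, l1, o2, m2, l2):
        """Merge two partial attention results computed over disjoint KV blocks.

        Each partial result carries its normalized output o_i (a convex
        combination of values, so bounded), running row-max m_i, and softmax
        denominator l_i. The merge runs in fp32, matching split-KV Flash
        Attention.
        """
        m = torch.maximum(m1, m2)            # new running max
        a1 = l1 * torch.exp(m1 - m)          # rescaled denominators
        a2 = l2 * torch.exp(m2 - m)
        l = a1 + a2
        # Outputs were already normalized per block (phase 1), so re-weight
        # each by its share of the combined denominator.
        o = (a1.unsqueeze(-1) * o1.float() + a2.unsqueeze(-1) * o2.float()) / l.unsqueeze(-1)
        return o, m, l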

Download files

Download the file for your platform.

Source Distribution

flash_attn_res-0.1.9.tar.gz (1.2 MB)

Uploaded: Source

Built Distribution


flash_attn_res-0.1.9-py2.py3-none-any.whl (17.5 kB)

Uploaded: Python 2, Python 3

File details

Details for the file flash_attn_res-0.1.9.tar.gz.

File metadata

  • Download URL: flash_attn_res-0.1.9.tar.gz
  • Upload date:
  • Size: 1.2 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.11

File hashes

Hashes for flash_attn_res-0.1.9.tar.gz
Algorithm Hash digest
SHA256 2aa1c5288b51c83a14aeec4ffbbd1370d89b2a763fe8f8a3bf589be09bca0ccc
MD5 75bdcd27c93a99d9aa4f6989daa7df30
BLAKE2b-256 13632e082f6ee009e8801773e44236b6bd27cc66f52b72d65c9cc014e8341f14


File details

Details for the file flash_attn_res-0.1.9-py2.py3-none-any.whl.


File hashes

Hashes for flash_attn_res-0.1.9-py2.py3-none-any.whl
Algorithm Hash digest
SHA256 0509338b6a84f1a754b97133fe499e152d02b3751d6fac51014b4a21b230b09b
MD5 1e1c5ed9b402bef32f83b5b2ee82af8b
BLAKE2b-256 130bf7c95aab135cc6a7373fccf353bafaccdfc6dabf7b6505adbf2f6ec7af94

