Attention Residuals (AttnRes) kernels

Project description

Flash Attention Residuals

1.4x faster inference and training than an optimized torch.compile implementation of the paper's two-phase batched attention with online softmax

20% reduction in training memory (without activation checkpointing)*

*Benchmarked on an H100. Results depend on problem size and setup.

Credits:

Thanks to Mohamed Osman (https://github.com/spaghettiSystems) and Cartesia for advising on and supporting the development of this kernel.

Install

pip install flash-attn-res

Roadmap:

  • Properly evaluate the backward pass
  • Implement in CuTe and CUDA
  • Tune precision
  • Mixed FP16/BF16 storage with a stored quantization scale
  • Stochastic rounding
  • Make into a Python package

Insights:

  • Normalizing in phase 1 keeps outputs bounded (each output is a convex combination of the values), so bf16 error doesn't scale with softmax flatness. Phase 2 computes in fp32, and the reduction algebra matches split-KV Flash Attention; see the combine sketch after this list.
  • Certain dimensions, especially NUM_QUERIES_PER_BLOCK, are small, so a semi-elementwise (B, T) kernel using tl.static_range beats tl.dot.
  • The kernel is memory-bound, and the semi-elementwise formulation allows it to be fused with neighboring work.
  • NUM_SOURCE_BLOCKS and NUM_QUERIES_PER_BLOCK should be autotuning keys, which torch.compile does not allow; tuning on them yields faster kernels.
  • NUM_QUERIES_PER_BLOCK is small, so loads should use eviction_policy "evict_last". Both of the last two points appear in the kernel sketch below.
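
For the phase-2 reduction, here is a minimal sketch of the split-KV combine in plain PyTorch. It assumes phase 1 emits normalized partial outputs, as the first bullet states; this is the standard Flash Attention split-KV algebra, and the names (o1, m1, l1, ...) are illustrative rather than the package's actual internals:

```python
import torch

def combine_splits(o1, m1, l1, o2, m2, l2):
    # Each split i carries:
    #   o_i: normalized partial output (a convex combination of values, so bounded)
    #   m_i: row-wise running max of that split's logits
    #   l_i: softmax denominator accumulated at that max
    # All combine math happens in fp32 (phase 2).
    m = torch.maximum(m1, m2)        # merged running max
    a1 = l1 * torch.exp(m1 - m)      # denominators rescaled to the new max
    a2 = l2 * torch.exp(m2 - m)
    l = a1 + a2
    # The merged output is again a convex combination of o1 and o2,
    # hence bounded no matter how flat the softmax is.
    o = (a1.unsqueeze(-1) * o1 + a2.unsqueeze(-1) * o2) / l.unsqueeze(-1)
    return o, m, l
```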
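
And here is a minimal Triton sketch of the semi-elementwise pattern the last three bullets describe: the two block counts as autotune keys, a tl.static_range loop over the small query dimension instead of tl.dot, and "evict_last" loads for the small, reused weights. The kernel body (a weighted sum of value vectors) and all names are illustrative assumptions, not the package's actual kernel:

```python
import triton
import triton.language as tl

@triton.autotune(
    configs=[triton.Config({}, num_warps=2),
             triton.Config({}, num_warps=4)],
    # Unlike torch.compile, problem-shape constants can be autotuning
    # keys, so the kernel is re-tuned per problem size.
    key=["NUM_SOURCE_BLOCKS", "NUM_QUERIES_PER_BLOCK"],
)
@triton.jit
def semi_elementwise_kernel(
    w_ptr, v_ptr, out_ptr,
    NUM_SOURCE_BLOCKS,                    # runtime loop bound
    NUM_QUERIES_PER_BLOCK: tl.constexpr,  # small -> unrolled statically
    HEAD_DIM: tl.constexpr,               # assumed a power of two
):
    # One program per (batch, time) position: a semi-elementwise (B, T)
    # grid, which makes the kernel easy to fuse with neighboring work.
    pid = tl.program_id(0)
    d = tl.arange(0, HEAD_DIM)

    # NUM_QUERIES_PER_BLOCK is too small for tl.dot to pay off; unroll it.
    for q in tl.static_range(NUM_QUERIES_PER_BLOCK):
        acc = tl.zeros([HEAD_DIM], dtype=tl.float32)
        for s in range(NUM_SOURCE_BLOCKS):
            # The per-(query, source) weight is tiny and reused, so ask
            # the cache to keep it resident.
            w = tl.load(
                w_ptr + (pid * NUM_QUERIES_PER_BLOCK + q) * NUM_SOURCE_BLOCKS + s,
                eviction_policy="evict_last",
            )
            v = tl.load(v_ptr + (pid * NUM_SOURCE_BLOCKS + s) * HEAD_DIM + d)
            acc += w.to(tl.float32) * v.to(tl.float32)
        tl.store(out_ptr + (pid * NUM_QUERIES_PER_BLOCK + q) * HEAD_DIM + d, acc)
```

Launched with a one-dimensional grid of B*T programs; because each program streams its values once and does only multiply-accumulates, runtime tracks bytes moved rather than FLOPs, matching the memory-bound observation above.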
