
Flash Attention CUTE (CUDA Template Engine) implementation

Project description

FlashAttention-4 (CuTeDSL)

FlashAttention-4 is a CuTeDSL-based implementation of FlashAttention for Hopper and Blackwell GPUs.

Installation

pip install flash-attn-4

If you're on CUDA 13, install with the cu13 extra for best performance:

pip install "flash-attn-4[cu13]"

Usage

from flash_attn.cute import flash_attn_func, flash_attn_varlen_func

out = flash_attn_func(q, k, v, causal=True)
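A fuller, self-contained sketch follows. The shapes and dtypes here are assumptions carried over from earlier FlashAttention releases: q, k, v are (batch, seqlen, nheads, headdim) tensors in fp16 or bf16 on a CUDA device.

import torch
from flash_attn.cute import flash_attn_func

batch, seqlen, nheads, headdim = 2, 1024, 8, 128
q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.bfloat16)
k = torch.randn_like(q)
v = torch.randn_like(q)

out = flash_attn_func(q, k, v, causal=True)  # output has the same shape as q

flash_attn_varlen_func is the variable-length variant; in prior releases it takes packed q/k/v plus cumulative sequence-length tensors (cu_seqlens_q, cu_seqlens_k) instead of a padded batch, and the same pattern presumably applies here.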

Development

git clone https://github.com/Dao-AILab/flash-attention.git
cd flash-attention
pip install -e "flash_attn/cute[dev]"       # CUDA 12.x
pip install -e "flash_attn/cute[dev,cu13]"  # CUDA 13.x (e.g. B200)
pytest tests/cute/
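For a quick smoke test of an editable install, you can compare the kernel against PyTorch's reference attention. This is a sketch, not part of the official test suite; the shapes and the loose bf16 tolerance are assumptions.

import torch
import torch.nn.functional as F
from flash_attn.cute import flash_attn_func

q = torch.randn(1, 512, 4, 64, device="cuda", dtype=torch.bfloat16)
k, v = torch.randn_like(q), torch.randn_like(q)

out = flash_attn_func(q, k, v, causal=True)

# SDPA expects (batch, nheads, seqlen, headdim), so transpose in and back out.
ref = F.scaled_dot_product_attention(
    q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2), is_causal=True
).transpose(1, 2)

assert torch.allclose(out, ref, atol=2e-2, rtol=0), "mismatch vs. reference attention"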

Download files

Download the file for your platform.

Source Distribution

flash_attn_4-4.0.0b8.tar.gz (213.7 kB)


Built Distribution


flash_attn_4-4.0.0b8-py3-none-any.whl (231.3 kB)


File details

Details for the file flash_attn_4-4.0.0b8.tar.gz.

File metadata

  • Download URL: flash_attn_4-4.0.0b8.tar.gz
  • Size: 213.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for flash_attn_4-4.0.0b8.tar.gz:

  • SHA256: c1007ded9bfdb214715c56cdb35cf3eee096a7ec877f4963f00f1244c98d35ff
  • MD5: 6d6dda1c4c9ab75a0c167e5c99af9daa
  • BLAKE2b-256: 6a3d22e3a3f17a1d0b405c2f08788353c074b173af1ae359985d207826bff0bc

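To verify a download against the SHA256 digest above (the same pattern works for the wheel below), a minimal sketch; the file path is a placeholder for wherever pip or your browser saved the archive:

import hashlib

EXPECTED = "c1007ded9bfdb214715c56cdb35cf3eee096a7ec877f4963f00f1244c98d35ff"

with open("flash_attn_4-4.0.0b8.tar.gz", "rb") as f:  # placeholder path to your download
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == EXPECTED, f"hash mismatch: {digest}"
print("sdist verified")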

Provenance

The following attestation bundles were made for flash_attn_4-4.0.0b8.tar.gz:

Publisher: publish-fa4.yml on Dao-AILab/flash-attention

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file flash_attn_4-4.0.0b8-py3-none-any.whl.

File metadata

  • Download URL: flash_attn_4-4.0.0b8-py3-none-any.whl
  • Size: 231.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for flash_attn_4-4.0.0b8-py3-none-any.whl:

  • SHA256: bae222f60169fb411ce1fdd0d2e0f43f313d25f9f370c40a8b0edaf2743b3b62
  • MD5: 4a1f3103c74913864b0da4b36d6a434c
  • BLAKE2b-256: 4715a5bfe1e92c868331a16747543f1b937074a6b398caf630c4813336d70418


Provenance

The following attestation bundles were made for flash_attn_4-4.0.0b8-py3-none-any.whl:

Publisher: publish-fa4.yml on Dao-AILab/flash-attention

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
