
Flash Attention CUTE (CUDA Template Engine) implementation

Project description

FlashAttention-4 (CuTeDSL)

FlashAttention-4 is a CuTeDSL-based implementation of FlashAttention for Hopper and Blackwell GPUs.

Installation

pip install flash-attn-4

If you're on CUDA 13, install with the cu13 extra for best performance:

pip install "flash-attn-4[cu13]"

Usage

from flash_attn.cute import flash_attn_func, flash_attn_varlen_func

out = flash_attn_func(q, k, v, causal=True)
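
A fuller sketch of the dense call, assuming the (batch, seqlen, nheads, headdim) tensor layout and bf16/fp16 dtypes used by earlier FlashAttention releases; the sizes below are illustrative, not requirements:

import torch
from flash_attn.cute import flash_attn_func, flash_attn_varlen_func

batch, seqlen, nheads, headdim = 2, 1024, 16, 128  # illustrative sizes
q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.bfloat16)
k = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.bfloat16)
v = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.bfloat16)
out = flash_attn_func(q, k, v, causal=True)  # output has the same shape as q

For variable-length batches, flash_attn_varlen_func takes tokens packed along one axis plus cumulative sequence lengths. The call below assumes the FlashAttention-2 varlen convention (int32 prefix-sum cu_seqlens and explicit max sequence lengths); the exact signature in this release may differ:

total = 3 + 5  # two sequences of lengths 3 and 5, packed together
q_packed = torch.randn(total, nheads, headdim, device="cuda", dtype=torch.bfloat16)
k_packed = torch.randn(total, nheads, headdim, device="cuda", dtype=torch.bfloat16)
v_packed = torch.randn(total, nheads, headdim, device="cuda", dtype=torch.bfloat16)
cu_seqlens = torch.tensor([0, 3, 8], device="cuda", dtype=torch.int32)
out_varlen = flash_attn_varlen_func(
    q_packed, k_packed, v_packed,
    cu_seqlens_q=cu_seqlens, cu_seqlens_k=cu_seqlens,
    max_seqlen_q=5, max_seqlen_k=5,
    causal=True,
)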

Development

git clone https://github.com/Dao-AILab/flash-attention.git
cd flash-attention
pip install -e "flash_attn/cute[dev]"
pytest tests/cute/

Download files

Download the file for your platform.

Source Distribution

flash_attn_4-4.0.0b7.tar.gz (213.8 kB, Source)

Built Distribution


flash_attn_4-4.0.0b7-py3-none-any.whl (231.4 kB, Python 3)

File details

Details for the file flash_attn_4-4.0.0b7.tar.gz.

File metadata

  • Download URL: flash_attn_4-4.0.0b7.tar.gz
  • Upload date:
  • Size: 213.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for flash_attn_4-4.0.0b7.tar.gz

  • SHA256: 882d8db1615f1becd62d23941e6ee259b9496896d22000bc67b42f89eae24eac
  • MD5: 15e19d733e64eb86b84fdc293584b285
  • BLAKE2b-256: 8e8da074d34ab7d5b82c4ccd6018543ee410101b00bc5c0e6ced78625f680d2b

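To check a downloaded file against the digests above, a standard-library sketch (assuming the sdist sits in the current directory):

import hashlib
from pathlib import Path

expected = "882d8db1615f1becd62d23941e6ee259b9496896d22000bc67b42f89eae24eac"
digest = hashlib.sha256(Path("flash_attn_4-4.0.0b7.tar.gz").read_bytes()).hexdigest()
assert digest == expected, f"unexpected SHA256: {digest}"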

Provenance

The following attestation bundles were made for flash_attn_4-4.0.0b7.tar.gz:

Publisher: publish-fa4.yml on Dao-AILab/flash-attention

Attestations: values shown reflect the state when the release was signed and may no longer be current.

File details

Details for the file flash_attn_4-4.0.0b7-py3-none-any.whl.

File metadata

  • Download URL: flash_attn_4-4.0.0b7-py3-none-any.whl
  • Upload date:
  • Size: 231.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for flash_attn_4-4.0.0b7-py3-none-any.whl

  • SHA256: c980696efe7f3572c8bdcd8e45a27451305401a6c775e1498e3f04427c441d6d
  • MD5: f4b307ce7f84ab465972db4d90f3d4ad
  • BLAKE2b-256: 43ae2fc8645593f12bce760c944d5545ce11b8f45a817421a8ea4e67fe313c5b


Provenance

The following attestation bundles were made for flash_attn_4-4.0.0b7-py3-none-any.whl:

Publisher: publish-fa4.yml on Dao-AILab/flash-attention

Attestations: values shown reflect the state when the release was signed and may no longer be current.
