Flash Attention CUTE (CUDA Template Engine) implementation

Project description

FlashAttention-4 (CuTeDSL)

FlashAttention-4 is a CuTeDSL-based implementation of FlashAttention for Hopper and Blackwell GPUs.

Installation

pip install flash-attn-4

Usage

from flash_attn.cute import flash_attn_func, flash_attn_varlen_func

out = flash_attn_func(q, k, v, causal=True)
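
As a fuller, runnable sketch, the snippet below fills in example inputs. The shapes, dtypes, and varlen keywords assume the calling convention of earlier flash_attn releases (q/k/v as (batch, seqlen, nheads, headdim) on the dense path; packed tokens plus cu_seqlens offsets on the varlen path) and should be checked against the actual signatures in flash_attn/cute:

# Illustrative only: shapes, dtypes, and varlen keyword names assume the
# FlashAttention-2 calling convention; verify against flash_attn/cute.
import torch
from flash_attn.cute import flash_attn_func, flash_attn_varlen_func

batch, seqlen, nheads, headdim = 2, 1024, 8, 128
q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.bfloat16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Dense path: one call for a batch of equal-length sequences.
out = flash_attn_func(q, k, v, causal=True)  # same shape as q

# Varlen path: sequences of different lengths packed along one token axis,
# delimited by cumulative start offsets in cu_seqlens.
seqlens = torch.tensor([512, 1024], device="cuda", dtype=torch.int32)
cu_seqlens = torch.nn.functional.pad(torch.cumsum(seqlens, 0, dtype=torch.int32), (1, 0))
total_tokens = int(cu_seqlens[-1])
qp = torch.randn(total_tokens, nheads, headdim, device="cuda", dtype=torch.bfloat16)
kp, vp = torch.randn_like(qp), torch.randn_like(qp)
out_packed = flash_attn_varlen_func(
    qp, kp, vp,
    cu_seqlens_q=cu_seqlens, cu_seqlens_k=cu_seqlens,
    max_seqlen_q=1024, max_seqlen_k=1024,
    causal=True,
)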

Development

git clone https://github.com/Dao-AILab/flash-attention.git
cd flash-attention
pip install -e "flash_attn/cute[dev]"
pytest tests/cute/

Download files

Download the file for your platform.

Source Distribution

flash_attn_4-4.0.0b4.tar.gz (186.7 kB)

Built Distribution

flash_attn_4-4.0.0b4-py3-none-any.whl (202.0 kB)

File details

Details for the file flash_attn_4-4.0.0b4.tar.gz.

File metadata

  • Download URL: flash_attn_4-4.0.0b4.tar.gz
  • Size: 186.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for flash_attn_4-4.0.0b4.tar.gz
  • SHA256: 00177ff0a09da33b6838ae292b87db6574daee8db98a3800f2aa9da60ebfc904
  • MD5: 0bee1679bcda5b40d22216868ff34d53
  • BLAKE2b-256: aaa45ef0725e72c803b3dc9fe2dcd1e2e1b06712b7ca69bcaababae3d7f078d0

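These digests let you verify a download before installing it. A minimal check using only Python's standard-library hashlib, assuming the sdist has been downloaded to the current directory:

import hashlib
from pathlib import Path

# Published SHA256 for the sdist (from the list above).
EXPECTED = "00177ff0a09da33b6838ae292b87db6574daee8db98a3800f2aa9da60ebfc904"

digest = hashlib.sha256(Path("flash_attn_4-4.0.0b4.tar.gz").read_bytes()).hexdigest()
assert digest == EXPECTED, f"SHA256 mismatch: {digest}"
print("sdist matches the published digest")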

Provenance

The following attestation bundles were made for flash_attn_4-4.0.0b4.tar.gz:

Publisher: publish-fa4.yml on Dao-AILab/flash-attention

File details

Details for the file flash_attn_4-4.0.0b4-py3-none-any.whl.

File metadata

  • Download URL: flash_attn_4-4.0.0b4-py3-none-any.whl
  • Size: 202.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for flash_attn_4-4.0.0b4-py3-none-any.whl
  • SHA256: daaec7e76e9aed0bbc717228b4b8461387a99fa71629fb34a8dd00fc56e291e4
  • MD5: 312ac8b1e0897b40fbe1986e3bdcaa24
  • BLAKE2b-256: 738eb27eed4f2dcb9e5a89719e2233be36e86faa64fd1bead802265c7a7b046c

The same hashlib check shown above verifies this wheel; substitute the wheel filename and its SHA256 digest.

Provenance

The following attestation bundles were made for flash_attn_4-4.0.0b4-py3-none-any.whl:

Publisher: publish-fa4.yml on Dao-AILab/flash-attention
