Flash Attention CUTE (CUDA Template Engine) implementation

Project description

FlashAttention-4 (CuTeDSL)

FlashAttention-4 is a CuTeDSL-based implementation of FlashAttention for Hopper and Blackwell GPUs.

Installation

pip install flash-attn-4
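
The release described on this page is a beta; to pin it explicitly:

pip install flash-attn-4==4.0.0b5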

Usage

from flash_attn.cute import flash_attn_func, flash_attn_varlen_func

out = flash_attn_func(q, k, v, causal=True)
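
A more complete, self-contained sketch of both entry points. The (batch, seqlen, nheads, headdim) tensor layout, the fp16/bf16 dtype requirement, and the varlen argument names (cu_seqlens_q, cu_seqlens_k, max_seqlen_q, max_seqlen_k) are assumptions carried over from the flash-attn 2.x interface, and the shapes are illustrative:

import torch
from flash_attn.cute import flash_attn_func, flash_attn_varlen_func

# Dense (fixed-length) attention; layout assumed: (batch, seqlen, nheads, headdim).
q = torch.randn(2, 1024, 16, 128, device="cuda", dtype=torch.bfloat16)
k = torch.randn(2, 1024, 16, 128, device="cuda", dtype=torch.bfloat16)
v = torch.randn(2, 1024, 16, 128, device="cuda", dtype=torch.bfloat16)
out = flash_attn_func(q, k, v, causal=True)  # same shape as q

# Variable-length attention: sequences of length 512 and 1024 packed into one
# (total_tokens, nheads, headdim) tensor, delimited by cumulative sequence lengths.
q_packed = torch.randn(1536, 16, 128, device="cuda", dtype=torch.bfloat16)
k_packed = torch.randn(1536, 16, 128, device="cuda", dtype=torch.bfloat16)
v_packed = torch.randn(1536, 16, 128, device="cuda", dtype=torch.bfloat16)
cu_seqlens = torch.tensor([0, 512, 1536], device="cuda", dtype=torch.int32)
out_varlen = flash_attn_varlen_func(
    q_packed, k_packed, v_packed,
    cu_seqlens_q=cu_seqlens, cu_seqlens_k=cu_seqlens,
    max_seqlen_q=1024, max_seqlen_k=1024,
    causal=True,
)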

Development

git clone https://github.com/Dao-AILab/flash-attention.git
cd flash-attention
pip install -e "flash_attn/cute[dev]"
pytest tests/cute/

Download files

Download the file for your platform. If you're not sure which to choose, the pure-Python wheel below works on any platform.

Source Distribution

flash_attn_4-4.0.0b5.tar.gz (209.3 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, see the breakdown below.

flash_attn_4-4.0.0b5-py3-none-any.whl (226.9 kB)

Uploaded Python 3
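
A wheel file name follows the pattern {distribution}-{version}-{python tag}-{abi tag}-{platform tag}.whl, so this file breaks down as:

  • distribution: flash_attn_4
  • version: 4.0.0b5
  • python tag: py3 (any Python 3 interpreter)
  • abi tag: none (no compiled-extension ABI)
  • platform tag: any (platform-independent)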

File details

Details for the file flash_attn_4-4.0.0b5.tar.gz.

File metadata

  • Download URL: flash_attn_4-4.0.0b5.tar.gz
  • Upload date:
  • Size: 209.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for flash_attn_4-4.0.0b5.tar.gz:

  • SHA256: c4ae80159350137d82b1ee0870bcec7dda237d9ab940ee002d5662c1d5f27230
  • MD5: 835f479af55180b003c81ed29b10e651
  • BLAKE2b-256: 7a375b0128141da6faf03ec0cee1fc30061a3f50a3a4fcf6bc2a8bafa32e4cd2

These hashes can be used to verify that a downloaded file matches the published release.
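
For example, a minimal check of the sdist against the SHA256 digest above, using only the Python standard library (the local file path is illustrative and assumes the file has already been downloaded):

import hashlib

expected = "c4ae80159350137d82b1ee0870bcec7dda237d9ab940ee002d5662c1d5f27230"
with open("flash_attn_4-4.0.0b5.tar.gz", "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()
assert actual == expected, f"SHA256 mismatch: {actual}"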

Provenance

The following attestation bundles were made for flash_attn_4-4.0.0b5.tar.gz:

Publisher: publish-fa4.yml on Dao-AILab/flash-attention


File details

Details for the file flash_attn_4-4.0.0b5-py3-none-any.whl.

File metadata

  • Download URL: flash_attn_4-4.0.0b5-py3-none-any.whl
  • Upload date:
  • Size: 226.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for flash_attn_4-4.0.0b5-py3-none-any.whl:

  • SHA256: 5239d748700ed7cf08d5703b4bb8ccb3fe26d23d12bb34fc67b694d53f8c2ecc
  • MD5: 054a294c034057142b9c4776f74eb140
  • BLAKE2b-256: 24f701ee2576ce41f9884d291ee21861ef194afc0b2b1ce3bd175fc7a6e1b133

These hashes can be used to verify the wheel in the same way as the source distribution above.

Provenance

The following attestation bundles were made for flash_attn_4-4.0.0b5-py3-none-any.whl:

Publisher: publish-fa4.yml on Dao-AILab/flash-attention

