Native Sparse Attention
Implementation of the sparse attention pattern proposed by the DeepSeek team in their Native Sparse Attention paper.
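At a high level, the NSA pattern runs three attention branches per query (coarse attention over compressed blocks, fine attention over a few selected blocks, and local sliding-window attention) and mixes their outputs with learned gates. The snippet below is only a minimal conceptual sketch of that gated combination under my reading of the paper, not this repository's implementation; the module and parameter names are illustrative.

```python
# Conceptual sketch (illustrative, not this library's code):
# NSA mixes three attention branch outputs per token with learned sigmoid gates.
import torch
from torch import nn

class ThreeBranchGate(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # one gate per branch, predicted from the input token representation
        self.to_gates = nn.Linear(dim, 3)

    def forward(self, x, compressed_out, selected_out, sliding_out):
        # x and each branch output: (batch, seq, dim)
        gates = self.to_gates(x).sigmoid()                      # (batch, seq, 3)
        branches = torch.stack((compressed_out, selected_out, sliding_out), dim = -2)
        return (gates.unsqueeze(-1) * branches).sum(dim = -2)   # (batch, seq, dim)
```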
This will be my last open-sourced project under Meta.
Appreciation
- Phil Tillet for democratizing CUDA kernel hacking with Triton
- Flex Attention for allowing rapid prototyping
- @Mr-Grin for the code review and for pointing out an inaccuracy in the implementation
- Eric Pasewark for submitting a simple transformer-based compression network
- @Mr-Grin for a pull request that fixes the compression block hyperparameters
- @StrongSpoon for a memory access guard
Install
$ pip install native-sparse-attention-pytorch
Usage
```python
import torch
from native_sparse_attention_pytorch import SparseAttention

attn = SparseAttention(
    dim = 512,
    dim_head = 64,
    heads = 8,
    sliding_window_size = 2,
    compress_block_size = 4,
    compress_block_sliding_stride = 2,
    selection_block_size = 4,
    num_selected_blocks = 2
)

tokens = torch.randn(2, 31, 512)

attended = attn(tokens)

assert tokens.shape == attended.shape
```
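As a hedged usage sketch (not part of the library's API), the module above can be dropped into a standard pre-norm residual block; only the constructor arguments shown in the snippet above are assumed.

```python
import torch
from torch import nn
from native_sparse_attention_pytorch import SparseAttention

class SparseAttentionBlock(nn.Module):
    """Pre-norm residual block around SparseAttention (illustrative only)."""
    def __init__(self, dim = 512):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = SparseAttention(
            dim = dim,
            dim_head = 64,
            heads = 8,
            sliding_window_size = 2,
            compress_block_size = 4,
            compress_block_sliding_stride = 2,
            selection_block_size = 4,
            num_selected_blocks = 2
        )

    def forward(self, tokens):
        # residual connection around the sparse attention layer
        return tokens + self.attn(self.norm(tokens))

block = SparseAttentionBlock()
out = block(torch.randn(2, 31, 512))
assert out.shape == (2, 31, 512)
```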
Example
Enwik8 language modeling
$ pip install .[examples]
Then
$ python train.py
To record some of your experiments, just invoke `wandb login` first before modifying the training script.
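Putting the example commands together (assuming the `wandb` CLI is already available in your environment):

$ pip install .[examples]
$ wandb login
$ python train.py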
Citations
@inproceedings{Yuan2025NativeSA,
title = {Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention},
author = {Jingyang Yuan and Huazuo Gao and Damai Dai and Junyu Luo and Liang Zhao and Zhengyan Zhang and Zhenda Xie and Y. X. Wei and Lean Wang and Zhiping Xiao and Yuqing Wang and Chong Ruan and Ming Zhang and Wenfeng Liang and Wangding Zeng},
year = {2025},
url = {https://api.semanticscholar.org/CorpusID:276408911}
}
@inproceedings{Keles2022OnTC,
title = {On The Computational Complexity of Self-Attention},
author = {Feyza Duman Keles and Pruthuvi Maheshakya Wijewardena and Chinmay Hegde},
booktitle = {International Conference on Algorithmic Learning Theory},
year = {2022},
url = {https://api.semanticscholar.org/CorpusID:252198880}
}
Download files
File details
Details for the file native_sparse_attention_pytorch-0.2.3.tar.gz.
File metadata
- Download URL: native_sparse_attention_pytorch-0.2.3.tar.gz
- Upload date:
- Size: 36.8 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.9.23
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 7b9481bb52ee565d5098868c7345285e88342248915ff8e3e62f7fe945be2da0 |
| MD5 | 51e70f79b4ccb6bc37a4661e39683290 |
| BLAKE2b-256 | faee1c0f8875883103492e6d22a80507c2e5a768ccfcb97c28c9cc9d9ebc7d2c |
File details
Details for the file native_sparse_attention_pytorch-0.2.3-py3-none-any.whl.
File metadata
- Download URL: native_sparse_attention_pytorch-0.2.3-py3-none-any.whl
- Upload date:
- Size: 27.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.9.23
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 3a9ac0ebb423594aef3b2a2557f22ce55588395b6b41f53b64c198e2f2fee341 |
| MD5 | 5e9818adc94a04c5237aadc7d21a74bd |
| BLAKE2b-256 | f0a11f6ed96f00cd306cf3e364b7f593a4d47fc5f5d2feef4d50e5b38c445dd3 |