BSBR: Block Sparse Attention with Block Retrieval
A PyTorch implementation of Block Sparse Attention with Block Retrieval (BSBR), a novel attention mechanism for efficient processing of long sequences. This implementation is inspired by Shengding Hu's blog post Streaming models for efficient long-context reasoning. [1]
Features
- Efficient processing of long sequences by combining:
  - Standard attention within chunks
  - Block retrieval between chunks
- Configurable chunk size
- Optional state compression
- Memory efficient, with complexity linear in sequence length
Read our analysis comparing BSBR against the other implemented models here
Implemented Transformer Architectures
The repository includes implementations of several efficient transformer architectures:
- BSBR (Block Sparse with Block Retrieval): Our core implementation with chunk-based attention and efficient block retrieval
- Standard Transformer: The classic self-attention mechanism with O(n²) complexity
- Linear Transformer: Removes the softmax for O(n) complexity by exploiting the associativity of matrix multiplication (see the sketch after this list)
- DeltaNet: Enhanced Linear Transformer with a removal component for better memory management
- Sliding Window Transformer: Restricts attention to a fixed window size for O(n·w) complexity
- Hopfield Network: Memory-based attention inspired by modern Hopfield Networks
- GAU (Gated Attention Unit): Chunk-based parallel attention with gating mechanisms
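To make the complexity contrast concrete, here is a minimal, illustrative sketch of the associativity trick behind the Linear Transformer item above. It is not the package's implementation: it uses elu(x) + 1 as one common feature map and omits causal masking for brevity.

```python
import torch
import torch.nn.functional as F

def standard_attention(Q, K, V):
    # O(n^2): materializes the full n x n attention matrix
    return F.softmax(Q @ K.T / Q.shape[-1] ** 0.5, dim=-1) @ V

def linear_attention(Q, K, V):
    # O(n): with softmax removed, associativity lets us form the d x d summary K^T V first
    phi_q, phi_k = F.elu(Q) + 1, F.elu(K) + 1        # elu(x) + 1 keeps features positive
    kv = phi_k.T @ V                                  # [d, d] summary, independent of n
    normalizer = phi_q @ phi_k.sum(dim=0)             # [n]
    return (phi_q @ kv) / normalizer.unsqueeze(-1)

Q, K, V = (torch.randn(1024, 64) for _ in range(3))
print(standard_attention(Q, K, V).shape, linear_attention(Q, K, V).shape)
```

The point of the reordering is that K^T V is a fixed d × d summary, so the per-token cost no longer grows with sequence length.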
Installation
# Clone the repository
git clone https://github.com/yourusername/bsbr.git
cd bsbr
# Install the core package
pip install -e .
# Install with extras for evaluations and research
pip install -e ".[extras]"
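To verify the install, you can try importing the model class used in the example below:

```bash
python -c "from bsbr import BSBRModel; print('BSBR import OK')"
```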
Usage
Here's a simple example of how to use the BSBR model:
import torch
from bsbr import BSBRModel
# Model configuration
model = BSBRModel(
    vocab_size=10000,
    hidden_dim=512,
    num_layers=4,
    num_heads=8,
    chunk_size=128,
    ff_dim=2048,
    dropout=0.1,
    compression_factor=4  # Optional compression
)
# Input data
input_ids = torch.randint(0, 10000, (2, 256))
attention_mask = torch.ones(2, 256)
# Forward pass
outputs = model(input_ids, attention_mask)
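A possible next step (illustrative only): assuming `outputs` holds per-token hidden states of shape `[batch, seq_len, hidden_dim]`, you can attach your own projection head for a language-modeling loss. The `lm_head` layer below is our own, not part of the package, and the shape assumption should be checked against the model's documentation.

```python
import torch.nn as nn
import torch.nn.functional as F

# lm_head is our own layer (not part of bsbr); it maps hidden states to vocabulary logits
lm_head = nn.Linear(512, 10000)
logits = lm_head(outputs)                  # assumed shape: [2, 256, 10000]

# Shifted next-token cross-entropy: predict token t+1 from position t
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, 10000),
    input_ids[:, 1:].reshape(-1),
)
loss.backward()
```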
Components
Core Model
- BSBRAttention: The core attention mechanism
- BSBRLayer: A complete transformer layer with BSBR attention and feed-forward network
- BSBRModel: A full model with embedding, multiple BSBR layers, and normalization
Additional Models (Extras)
For evaluation and research purposes, we also include several alternative attention architectures:
- Standard Transformer: Classic transformer with full attention (baseline)
- Linear Transformer: Linear complexity transformer using a reformulated attention mechanism
- DeltaNet: Enhanced linear transformer with a removal component
- Sliding Window Transformer: Efficient attention with a fixed context window
- Hopfield Network: Associative memory-based attention for pattern completion
- GAU: Gated Attention Unit with chunk-based parallelism
These additional models are available in the bsbr_extras package and can be installed with the extras option.
Evaluation
The repository includes tools to evaluate and compare different architectures:
# Run comparison of all models
python evals/compare_models.py --seq_lengths 64 128 256 512 1024
# Compare specific models
python evals/compare_models.py --models BSBR Linear Hopfield GAU
# Analyze results
python evals/analyze_results.py --use_example_data
Evaluations include:
- Inference time
- Memory usage
- Parameter counts
- Computational complexity analysis
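For a quick sanity check outside the evals/ scripts, a rough timing and parameter-count sketch can be built from the usage example above; treat the repository's comparison scripts as the authoritative benchmarks.

```python
import time
import torch
from bsbr import BSBRModel

# Same configuration as the usage example (compression disabled)
model = BSBRModel(vocab_size=10000, hidden_dim=512, num_layers=4, num_heads=8,
                  chunk_size=128, ff_dim=2048, dropout=0.1)
model.eval()
print("parameters:", sum(p.numel() for p in model.parameters()))

for seq_len in (64, 128, 256, 512, 1024):
    input_ids = torch.randint(0, 10000, (1, seq_len))
    attention_mask = torch.ones(1, seq_len)
    with torch.no_grad():
        start = time.perf_counter()
        model(input_ids, attention_mask)
    print(f"seq_len={seq_len}: {time.perf_counter() - start:.3f}s")
```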
Algorithm
BSBR works by combining two types of attention:
- Within-chunk attention: standard attention with softmax

  softmax(QK^T · M_in)V

- Between-chunk attention: block retrieval using meta queries and keys

  Q ⊙ softmax(RH^T · M_out)F
Where:
- Q, K, V: Query, Key, Value matrices
- R, H: Meta queries and keys for chunk-level attention
- F: State vectors (flattened K^T·V for each chunk)
- M_in: Block diagonal mask
- M_out: Causal mask for chunk-level attention
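The following self-contained sketch illustrates the two formulas above with plain tensors. It is not the package's implementation: how R and H are produced is not specified here, so the sketch assumes mean-pooled per-chunk queries and keys, adds conventional 1/sqrt(d) scaling, and lets between-chunk attention look only at strictly earlier chunks.

```python
import torch
import torch.nn.functional as F

def bsbr_sketch(Q, K, V, chunk_size):
    n, d = Q.shape
    c = chunk_size
    num_chunks = n // c
    Qc = Q.view(num_chunks, c, d)
    Kc = K.view(num_chunks, c, d)
    Vc = V.view(num_chunks, c, d)

    # 1) Within-chunk attention: softmax(QK^T · M_in)V, a causal mask inside each chunk
    scores_in = Qc @ Kc.transpose(1, 2) / d ** 0.5                  # [num_chunks, c, c]
    causal = torch.tril(torch.ones(c, c, dtype=torch.bool))
    scores_in = scores_in.masked_fill(~causal, float("-inf"))
    within = F.softmax(scores_in, dim=-1) @ Vc                      # [num_chunks, c, d]

    # 2) Between-chunk attention: Q ⊙ softmax(RH^T · M_out)F
    Fstates = (Kc.transpose(1, 2) @ Vc).reshape(num_chunks, d * d)  # flattened K^T V per chunk
    R = Qc.mean(dim=1)                                              # meta queries (assumed: mean pooling)
    H = Kc.mean(dim=1)                                              # meta keys (assumed: mean pooling)
    scores_out = R @ H.T / d ** 0.5                                 # [num_chunks, num_chunks]
    prev_only = torch.tril(torch.ones(num_chunks, num_chunks, dtype=torch.bool), diagonal=-1)
    scores_out = scores_out.masked_fill(~prev_only, float("-inf"))
    attn_out = torch.nan_to_num(F.softmax(scores_out, dim=-1))      # first chunk has no history -> zeros
    retrieved = (attn_out @ Fstates).view(num_chunks, d, d)         # one K^T V state per chunk
    between = Qc @ retrieved                                        # apply each chunk's queries to its state

    return (within + between).reshape(n, d)

out = bsbr_sketch(torch.randn(256, 64), torch.randn(256, 64), torch.randn(256, 64), chunk_size=32)
print(out.shape)  # torch.Size([256, 64])
```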
License
This project is licensed under the MIT License - see the LICENSE file for details.
References
- [1] Hu, S. (2025). Streaming models for efficient long-context reasoning. Blog post. https://shengdinghu.github.io/blogs/streaming_model/
File details
Details for the file bsbr-0.1.1.tar.gz.
File metadata
- Download URL: bsbr-0.1.1.tar.gz
- Upload date:
- Size: 4.1 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 1c7f6ce5297d6c1cd3c053a19031b479841e5df31596dc195180221b73bbe404 |
| MD5 | 5f5e5cfd91fc7c851fbb86f3d1eaa9bc |
| BLAKE2b-256 | 844242b5bf3828209dcf22f4c69141f81ce5abbb854f3bc50842176486ccc16b |
File details
Details for the file bsbr-0.1.1-py3-none-any.whl.
File metadata
- Download URL: bsbr-0.1.1-py3-none-any.whl
- Upload date:
- Size: 8.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | c4942401bc5cbe925dcf3e8fc4bdfcff7aaa18728d7475181e2041aef328a397 |
| MD5 | 2fcfcf35670a57b130b049bd5f622677 |
| BLAKE2b-256 | 914fb7fd7bafb5cd1d928d85782cd804f85f65f57c23bb56826462ae92059c05 |