Fast Weight Attention (wip)
An attention-based fast-weight episodic memory, in the same vein as the memory MLP from TTT / Titans and fwPKM from Sakana AI.
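For intuition, here is a minimal, illustrative sketch of the fast-weight idea these works share: a matrix memory is written with an outer-product (delta-rule) update and read out with a query. The update rule, the beta write rate, and the shapes below are assumptions chosen for illustration, not necessarily the mechanism this package implements.

import torch

def fast_weight_step(W, k, q, v, beta = 1.0):
    # illustrative delta-rule fast-weight update (an assumption, not this repo's exact rule)
    # W: (dim, dim) fast-weight memory; k, q, v: (dim,) key, query, value vectors
    v_old = W @ k                              # value currently stored under key k
    W = W + beta * torch.outer(v - v_old, k)   # write: correct the stored value toward v
    return W @ q, W                            # read with the query, return updated memory

dim = 512
W = torch.zeros(dim, dim)
k = torch.nn.functional.normalize(torch.randn(dim), dim = 0)   # unit-norm key
v = torch.randn(dim)

out, W = fast_weight_step(W, k, k, v)          # querying with the same unit-norm key recalls v
assert torch.allclose(out, v, atol = 1e-5)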
Install
$ pip install fast-weight-attention
Usage
import torch
from fast_weight_attention import FastWeightAttention

mem = FastWeightAttention(512, causal = True)

tokens = torch.randn(1, 64, 512)

# no memory state on the first call
past_mem = None

# each call retrieves from memory and returns the updated memory state
retrieved, next_mem = mem(tokens, past_mem = past_mem, return_next_memories = True)

# thread the returned memories into subsequent calls
retrieved, next_mem = mem(tokens, past_mem = next_mem, return_next_memories = True)
retrieved, next_mem = mem(tokens, past_mem = next_mem, return_next_memories = True)

assert retrieved.shape == tokens.shape
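The same pattern extends to streaming a long sequence chunk by chunk, threading the returned memories between calls. A small sketch of one plausible streaming loop (the chunk size and sequence length here are arbitrary, and equivalence to a single full-sequence pass is not claimed):

import torch
from fast_weight_attention import FastWeightAttention

mem = FastWeightAttention(512, causal = True)

long_seq = torch.randn(1, 256, 512)

past_mem = None
outputs = []

for chunk in long_seq.split(64, dim = 1):   # four chunks of 64 tokens
    retrieved, past_mem = mem(chunk, past_mem = past_mem, return_next_memories = True)
    outputs.append(retrieved)

out = torch.cat(outputs, dim = 1)
assert out.shape == long_seq.shape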
Citations
@article{zhang2026loger,
    title   = {LoGeR: Long-Context Geometric Reconstruction with Hybrid Memory},
    author  = {Zhang, Junyi and Herrmann, Charles and Hur, Junhwa and Sun, Chen and Yang, Ming-Hsuan and Cole, Forrester and Darrell, Trevor and Sun, Deqing},
    journal = {arXiv preprint arXiv:2603.03269},
    year    = {2026}
}

@misc{zhao2026fastweightproductkeymemory,
    title         = {Fast-weight Product Key Memory},
    author        = {Tianyu Zhao and Llion Jones},
    year          = {2026},
    eprint        = {2601.00671},
    archivePrefix = {arXiv},
    primaryClass  = {cs.CL},
    url           = {https://arxiv.org/abs/2601.00671}
}

@misc{jordan2024muon,
    author = {Keller Jordan and Yuchen Jin and Vlado Boza and Jiacheng You and Franz Cesista and Laker Newhouse and Jeremy Bernstein},
    title  = {Muon: An optimizer for hidden layers in neural networks},
    year   = {2024},
    url    = {https://kellerjordan.github.io/posts/muon/}
}

@article{Yaghoubietal2026,
    author  = {Yaghoubi, Mohammad and Nieto-Posadas, Andres and Mosser, Coralie-Anne and Gisiger, Thomas and Wilson, Émmanuel and Williams, Sylvain and Brandon, Mark P.},
    title   = {Predictive coding of reward in the hippocampus},
    journal = {Nature},
    year    = {2026},
    doi     = {10.1038/s41586-025-09958-0}
}
Download files
Download the file for your platform.
Source Distribution: fast_weight_attention-0.0.4.tar.gz
Built Distribution: fast_weight_attention-0.0.4-py3-none-any.whl
File details
Details for the file fast_weight_attention-0.0.4.tar.gz.
File metadata
- Download URL: fast_weight_attention-0.0.4.tar.gz
- Upload date:
- Size: 6.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.8.13
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 2bc9165d480389d733150bea16b28714b23570d1b8c31b3aa3e1ced965af8e03 |
| MD5 | 1348ab2646238fd54a6eaa937349a06e |
| BLAKE2b-256 | ab69d6293a260417d99d96ba9c15477e83ff36c65cb6589872e280b4877c3ae3 |
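To check a downloaded file against the digests above, the standard library suffices. A minimal sketch verifying the sdist's SHA256 (assumes the file sits in the current directory):

import hashlib

with open("fast_weight_attention-0.0.4.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

# compare against the SHA256 digest listed above
assert digest == "2bc9165d480389d733150bea16b28714b23570d1b8c31b3aa3e1ced965af8e03"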
File details
Details for the file fast_weight_attention-0.0.4-py3-none-any.whl.
File metadata
- Download URL: fast_weight_attention-0.0.4-py3-none-any.whl
- Upload date:
- Size: 6.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.8.13
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 08066583e8f6c7f7d3da8f5dfb6ddf9b46fd2c7fdf112f495f7c16b2991ea6b1 |
| MD5 | c79b934a28f9651f3e1451fd5c3bbb62 |
| BLAKE2b-256 | 47be8fe5c77d332538f0496c07f7b9a9722e9a64ccf9d28752e87f04e35a7ea6 |