
Fast Weight Attention (wip)

An attention-based fast-weight episodic memory, in the same vein as the memory MLP from TTT / Titans and the fast-weight product key memory (fwPKM) from Sakana AI.
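At a high level, the fast weights act as a per-sequence key / value store: tokens write new associations into the store and read back from it with softmax attention. The sketch below illustrates only that read / write cycle; the function names and the append-only update rule are illustrative assumptions, not this library's internals.

import torch

def write(mem_k, mem_v, k, v):
    # append the new keys / values to the episodic store (illustrative update rule)
    return torch.cat((mem_k, k), dim = 1), torch.cat((mem_v, v), dim = 1)

def read(mem_k, mem_v, q):
    # retrieve by attending over everything written so far (causal masking omitted for brevity)
    sim = torch.einsum('b i d, b j d -> b i j', q, mem_k) * (q.shape[-1] ** -0.5)
    return torch.einsum('b i j, b j d -> b i d', sim.softmax(dim = -1), mem_v)

dim = 64

mem_k = torch.zeros(1, 0, dim)  # memory starts empty
mem_v = torch.zeros(1, 0, dim)

k, v, q = (torch.randn(1, 16, dim) for _ in range(3))

mem_k, mem_v = write(mem_k, mem_v, k, v)
retrieved = read(mem_k, mem_v, q)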

Install

$ pip install fast-weight-attention

Usage

import torch
from fast_weight_attention import FastWeightAttention

mem = FastWeightAttention(512, causal = True)  # model dimension of 512, with causal masking

tokens = torch.randn(1, 64, 512)  # (batch, sequence length, dimension)

past_mem = None  # no memory on the first call

# thread the returned memory state into each subsequent call to accumulate the episodic memory

retrieved, next_mem = mem(tokens, past_mem = past_mem, return_next_memories = True)
retrieved, next_mem = mem(tokens, past_mem = next_mem, return_next_memories = True)
retrieved, next_mem = mem(tokens, past_mem = next_mem, return_next_memories = True)

assert retrieved.shape == tokens.shape
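
The same memory threading extends to streaming a long sequence chunk by chunk. The loop below builds only on the forward signature shown above; the chunk size and variable names are illustrative.

long_tokens = torch.randn(1, 256, 512)

past_mem = None
outputs = []

# process the sequence in chunks of 64, carrying the memory state across chunks

for chunk in long_tokens.split(64, dim = 1):
    retrieved, past_mem = mem(chunk, past_mem = past_mem, return_next_memories = True)
    outputs.append(retrieved)

retrieved = torch.cat(outputs, dim = 1)

assert retrieved.shape == long_tokens.shape

Resetting past_mem to None starts a fresh episode, as in the first call above.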

Citations

@article{zhang2026loger,
    title   = {LoGeR: Long-Context Geometric Reconstruction with Hybrid Memory},
    author  = {Zhang, Junyi and Herrmann, Charles and Hur, Junhwa and Sun, Chen and Yang, Ming-Hsuan and Cole, Forrester and Darrell, Trevor and Sun, Deqing},
    journal = {arXiv preprint arXiv:2603.03269},
    year    = {2026}
}

@misc{zhao2026fastweightproductkeymemory,
    title   = {Fast-weight Product Key Memory},
    author  = {Zhao, Tianyu and Jones, Llion},
    year    = {2026},
    eprint  = {2601.00671},
    archivePrefix = {arXiv},
    primaryClass  = {cs.CL},
    url     = {https://arxiv.org/abs/2601.00671}
}

@misc{jordan2024muon,
    title   = {Muon: An optimizer for hidden layers in neural networks},
    author  = {Jordan, Keller and Jin, Yuchen and Boza, Vlado and You, Jiacheng and Cesista, Franz and Newhouse, Laker and Bernstein, Jeremy},
    year    = {2024},
    url     = {https://kellerjordan.github.io/posts/muon/}
}

@article{yaghoubi2026predictive,
    title   = {Predictive coding of reward in the hippocampus},
    author  = {Yaghoubi, Mohammad and Nieto-Posadas, Andres and Mosser, Coralie-Anne and Gisiger, Thomas and Wilson, Émmanuel and Williams, Sylvain and Brandon, Mark P.},
    journal = {Nature},
    year    = {2026},
    doi     = {10.1038/s41586-025-09958-0}
}


