
Engram-PEFT: Parameter-Efficient Fine-Tuning with Engram

Project description

Engram-PEFT



> [!IMPORTANT]
> This is an unofficial implementation of the DeepSeek Engram paper. It is not affiliated with the DeepSeek-AI team.

Engram-PEFT is a high-performance, 100% paper-aligned implementation of the DeepSeek Engram architecture. It provides a Parameter-Efficient Fine-Tuning (PEFT) interface to inject conditional memory into any Transformer-based LLM.

Engram decouples static knowledge storage from dynamic reasoning using a sparse retrieval mechanism, allowing models to scale their factual memory without increasing inference FLOPs or interfering with core logic.
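
To make the mechanism concrete, here is a toy sketch of conditional memory via hashed lookup: a local bigram of token ids is hashed into a fixed table, and the retrieved vector is gated into the hidden state. This is an illustration only; the table size, hash function, and gating here are assumptions, not the paper's exact scheme or this library's internals.

import torch
import torch.nn as nn

class ToyEngramLookup(nn.Module):
    # Toy sketch: hash a local bigram of token ids into a fixed table,
    # then gate the retrieved vector into the hidden state.
    def __init__(self, table_size=2**20, dim=256):
        super().__init__()
        self.table = nn.Embedding(table_size, dim, sparse=True)  # few rows touched per step
        self.gate = nn.Linear(dim, 1)

    def forward(self, token_ids, hidden):             # (B, T), (B, T, dim)
        prev = torch.roll(token_ids, shifts=1, dims=1)
        idx = (token_ids * 1_000_003 + prev) % self.table.num_embeddings  # toy bigram hash
        mem = self.table(idx)                          # O(1) lookup per token
        g = torch.sigmoid(self.gate(hidden))           # input-conditioned gate, (B, T, 1)
        return hidden + g * mem                        # residual memory injection

Because the lookup is a table read rather than a matrix multiply, growing the table adds memory capacity without adding per-token compute.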


🚀 Quick Start

Installation

uv pip install engram-peft
# or
pip install engram-peft

5-Minute Example

from transformers import AutoModelForCausalLM, AutoTokenizer
from engram_peft import EngramConfig, get_engram_model

# 1. Load base model
base_model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T")

# 2. Inject Engram layers (aligned with arXiv:2601.07372)
config = EngramConfig(target_layers=[2, 11, 20])
model = get_engram_model(base_model, config, tokenizer)

# 3. Model is ready for training! 
# Only Engram parameters (approx 1% of total) are trainable.
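
To sanity-check the freeze, you can count trainable parameters with plain PyTorch (nothing Engram-specific is assumed here):

# Verify that only the Engram parameters are trainable (plain PyTorch).
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} / {total:,} ({100 * trainable / total:.2f}%)")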

📊 Performance Comparison

| Method | Extra Params | Trainable Params | VRAM (1.1B) | Relative Perplexity (TinyStories) | Memory Retention |
|---|---|---|---|---|---|
| FFT (Full Fine-Tune) | 0 | 1,100M | ~24 GB | 1.0 (ref.) | High |
| LoRA (r=16) | 1.8M | 1.8M | ~8 GB | 1.4 | Moderate |
| Engram-PEFT | 11.2M | 1.2M* | ~6 GB | 1.1 | Extreme |

* Engram uses sparse updates (only about 1% of Engram parameters are updated per step), drastically reducing optimizer memory.
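
The footnote's mechanism can be illustrated with plain PyTorch: an embedding table with sparse gradients only produces gradient (and optimizer-moment) updates for the rows actually retrieved in a step. This is a generic sketch of sparse updates, not Engram-PEFT's internal code:

import torch
import torch.nn as nn

# Generic illustration: only the retrieved rows of a large table are updated.
memory = nn.Embedding(1_000_000, 256, sparse=True)
opt = torch.optim.SparseAdam(memory.parameters(), lr=1e-3)

ids = torch.randint(0, 1_000_000, (32,))  # the few rows touched this step
loss = memory(ids).sum()
loss.backward()                           # memory.weight.grad is a sparse tensor
opt.step()                                # moments updated only for rows in `ids`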


🛠 Features

  • 100% Paper Alignment: Implements the hyperparameters from Appendix A, Table 5 and the official DeepSeek gating/hashing logic.
  • CPU-Side Precomputation: EngramDataCollator precomputes multi-head hashes on the CPU, so the GPU never stalls on hash computation.
  • Tokenizer Compression: Built-in NFKC and lowercase normalization yields a 23% vocabulary reduction (consistent with the paper); see the normalization sketch after this list.
  • Zero-Invasive: Injection happens via forward hooks, with no modification to your base model's architecture; a hook sketch also follows below.
  • Dynamic Switching: Load and swap "knowledge packs" at runtime without reloading the base model.
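
The normalization step is ordinary Unicode handling; a minimal sketch using only the standard library (the 23% figure depends on the tokenizer and corpus):

import unicodedata

def normalize_for_lookup(text: str) -> str:
    # NFKC folds compatibility variants (e.g., full-width forms), then lowercase.
    return unicodedata.normalize("NFKC", text).lower()

assert normalize_for_lookup("Ｅｎｇｒａｍ") == "engram"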
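And a rough sketch of hook-based injection, a generic PyTorch pattern rather than this library's actual internals (retrieve_memory is a hypothetical stand-in for the Engram lookup):

def make_memory_hook(retrieve_memory):
    # Forward hook that adds a retrieved memory vector to a layer's output.
    # `retrieve_memory` is a hypothetical stand-in for the Engram lookup.
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + retrieve_memory(hidden)
        return (hidden,) + tuple(output[1:]) if isinstance(output, tuple) else hidden
    return hook

# Attaching never modifies the model's class definition:
# handle = base_model.model.layers[2].register_forward_hook(make_memory_hook(fn))
# handle.remove()  # detach cleanly, e.g. when swapping knowledge packs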

📖 Documentation

For full details, see our documentation.


🎯 Citation

If you use this implementation in your research, please cite the original DeepSeek paper:

@article{deepseek2026engram,
  title={Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models},
  author={DeepSeek-AI},
  journal={arXiv preprint arXiv:2601.07372},
  year={2026}
}

License

Apache License 2.0. See LICENSE for details.



Download files

Download the file for your platform.

Source Distribution

engram_peft-1.0.0.tar.gz (31.5 kB)


Built Distribution


engram_peft-1.0.0-py3-none-any.whl (23.3 kB)


File details

Details for the file engram_peft-1.0.0.tar.gz.

File metadata

  • Download URL: engram_peft-1.0.0.tar.gz
  • Upload date:
  • Size: 31.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for engram_peft-1.0.0.tar.gz
| Algorithm | Hash digest |
|---|---|
| SHA256 | 81f67a77d194b88fa5639379ab233a9d2e5ffe4b65f8e19d54494d72ac982af0 |
| MD5 | e2ee929b893a49f28468ad95d44de978 |
| BLAKE2b-256 | 2197e6abd274221817e961992e7bdea673db9b1242194f1a8c20c0ece720e194 |


File details

Details for the file engram_peft-1.0.0-py3-none-any.whl.

File metadata

  • Download URL: engram_peft-1.0.0-py3-none-any.whl
  • Upload date:
  • Size: 23.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for engram_peft-1.0.0-py3-none-any.whl
| Algorithm | Hash digest |
|---|---|
| SHA256 | cf89ff45bf5f023cdc2e637744fa24ba03b8673e8648d1ee7e96df3a39e0c203 |
| MD5 | d667f7a8776cb68c7b8459df4c131db5 |
| BLAKE2b-256 | 021e162647daec3d9a0b64cf8fe47f056ed4ab8d2da1f5f301dee8b98360c21b |

