
Engram-PEFT: Efficient Parameter-Efficient Fine-Tuning with Engram

Project description

Engram-PEFT

[English] | 中文

Paper Official Demo License Documentation

> [!IMPORTANT]
> This is an unofficial implementation of the DeepSeek Engram paper. It is not affiliated with the DeepSeek-AI team.

Engram-PEFT is a high-performance, 100% paper-aligned implementation of the DeepSeek Engram architecture. It provides a Parameter-Efficient Fine-Tuning (PEFT) interface to inject conditional memory into any Transformer-based LLM.

Engram decouples static knowledge storage from dynamic reasoning using a sparse retrieval mechanism, allowing models to scale their factual memory without increasing inference FLOPs or interfering with core logic.
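
As a rough illustration of that mechanism (not the engram_peft internals), the sketch below hashes each position's trailing token n-gram into a fixed-size embedding table, retrieves one memory vector per token, and gates it into the hidden state. The class name, the toy rolling hash, and the sigmoid gate are all assumptions made for this example; the paper's actual hashing and gating logic are more involved.

```python
import torch
import torch.nn as nn

class ConditionalMemory(nn.Module):
    """Illustrative sketch of conditional memory via sparse lookup (not the engram_peft internals)."""

    def __init__(self, hidden_size: int, num_slots: int = 1_000_000, ngram: int = 2):
        super().__init__()
        self.ngram = ngram
        self.num_slots = num_slots
        self.table = nn.Embedding(num_slots, hidden_size)  # static knowledge storage
        self.gate = nn.Linear(hidden_size, 1)               # decides how much memory to mix in

    def hash_ngrams(self, input_ids: torch.Tensor) -> torch.Tensor:
        # Toy rolling hash over the trailing n-gram at each position.
        idx = torch.zeros_like(input_ids)
        for k in range(self.ngram):
            shifted = torch.roll(input_ids, shifts=k, dims=-1)
            shifted[..., :k] = 0
            idx = idx * 1000003 + shifted
        return idx % self.num_slots

    def forward(self, hidden: torch.Tensor, input_ids: torch.Tensor) -> torch.Tensor:
        mem = self.table(self.hash_ngrams(input_ids))  # sparse lookup: constant cost per token
        g = torch.sigmoid(self.gate(hidden))            # conditional gating against the hidden state
        return hidden + g * mem

# Hidden states from a Transformer layer would be passed through the memory module:
layer = ConditionalMemory(hidden_size=64, num_slots=4096)
hidden = torch.randn(2, 8, 64)
input_ids = torch.randint(0, 32000, (2, 8))
print(layer(hidden, input_ids).shape)  # torch.Size([2, 8, 64])
```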


🚀 Quick Start

Installation

```bash
uv pip install engram-peft
# or
pip install engram-peft
```

5-Minute Example

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from engram_peft import EngramConfig, get_engram_model

# 1. Load base model
base_model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T")

# 2. Inject Engram layers (aligned with arXiv:2601.07372)
config = EngramConfig(target_layers=[2, 11, 20])
model = get_engram_model(base_model, config, tokenizer)

# 3. Model is ready for training!
# Only Engram parameters (approx 1% of total) are trainable.
```
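
As a quick sanity check after wrapping, you can confirm that only the Engram parameters carry gradients. The snippet below uses only standard PyTorch calls and assumes that `get_engram_model` freezes the base weights (sets `requires_grad=False` on them), which is where the "approx 1% trainable" figure comes from:

```python
import torch

# Verify the freeze: only Engram parameters should require gradients.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable: {trainable:,} / {total:,} ({100 * trainable / total:.2f}%)")

# The wrapped model then trains like any Hugging Face model, e.g. with
# transformers.Trainer, or with a plain optimizer over the trainable subset:
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```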

📊 Performance Comparison

| Method | Extra Params | Trainable Params | VRAM (1.1B) | Perplexity (TinyStories) | Memory Retention |
| --- | --- | --- | --- | --- | --- |
| FFT (Full Fine-Tune) | 0 | 1,100M | ~24GB | 1.0 (Ref) | High |
| LoRA (r=16) | 1.8M | 1.8M | ~8GB | 1.4 | Moderate |
| Engram-PEFT | 11.2M | 1.2M* | ~6GB | 1.1 | Extreme |

* Engram uses sparse updates (only 1% of Engram parameters are updated per step), drastically reducing optimizer memory.
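
For intuition, the kind of sparsity described in that footnote can be reproduced with stock PyTorch: an `nn.Embedding(sparse=True)` produces gradients only for the rows actually looked up in a batch, and `torch.optim.SparseAdam` keeps optimizer state for those rows alone. This is a generic sketch of the technique, not the package's actual training configuration:

```python
import torch
import torch.nn as nn

# A large memory table with sparse gradients: only rows hit in a batch
# produce gradient entries, so optimizer state stays proportional to usage.
table = nn.Embedding(num_embeddings=1_000_000, embedding_dim=256, sparse=True)
optimizer = torch.optim.SparseAdam(table.parameters(), lr=1e-3)

idx = torch.randint(0, 1_000_000, (32, 128))  # batch of hashed lookups
loss = table(idx).pow(2).mean()
loss.backward()                               # gradient is sparse: at most a few thousand rows
optimizer.step()
```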


🛠 Features

  • 100% Paper Alignment: Implements Appendix A Table 5 parameters and the official DeepSeek gating/hashing logic.
  • CPU-Side Precomputation: EngramDataCollator precomputes multi-head hashes on CPU, ensuring 100% GPU utilization.
  • Tokenizer Compression: Built-in NFKC and lowercase normalization, yielding a 23% vocabulary reduction (consistent with the paper); see the sketch after this list.
  • Zero-Invasive: Injects via forward hooks; no modification to your base model architecture required.
  • Dynamic Switching: Load and swap "knowledge packs" at runtime without reloading the base model.
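
The normalization behind the Tokenizer Compression bullet is standard Unicode folding. The helper below is a minimal standalone sketch of that step only (the function name is ours, and the package's actual pipeline, including any vocabulary remapping, may differ):

```python
import unicodedata

def normalize(text: str) -> str:
    # NFKC folds compatibility characters (full-width forms, ligatures, etc.),
    # and lowercasing merges case variants, shrinking the effective vocabulary.
    return unicodedata.normalize("NFKC", text).lower()

print(normalize("ＦＵＬＬ－ｗｉｄｔｈ Ｔｅｘｔ"))  # -> "full-width text"
```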

📖 Documentation

For full details, see the project documentation.


🎯 Citation

If you use this implementation in your research, please cite the original DeepSeek paper:

```bibtex
@article{deepseek2026engram,
  title={Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models},
  author={DeepSeek-AI},
  journal={arXiv preprint arXiv:2601.07372},
  year={2026}
}
```

License

Apache License 2.0. See LICENSE for details.



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

engram_peft-1.0.1.tar.gz (31.6 kB)

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

engram_peft-1.0.1-py3-none-any.whl (23.4 kB)

File details

Details for the file engram_peft-1.0.1.tar.gz.

File metadata

  • Download URL: engram_peft-1.0.1.tar.gz
  • Upload date:
  • Size: 31.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for engram_peft-1.0.1.tar.gz
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 6be859ea6cf897a4cc79e5fbebd9328a730d878943f2d6906b06a9c74ee6a2ef |
| MD5 | 82e641271f5fac9ec303de5a34e146c4 |
| BLAKE2b-256 | abbb964725562485bb8b2e081ea39a99222bfda7f96150c4e93d989801065ef6 |

See more details on using hashes here.

File details

Details for the file engram_peft-1.0.1-py3-none-any.whl.

File metadata

  • Download URL: engram_peft-1.0.1-py3-none-any.whl
  • Upload date:
  • Size: 23.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for engram_peft-1.0.1-py3-none-any.whl
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 2ba101cd5d1863aadeda5f5df646877cb7b542ea17804f81b2feb144315e9e89 |
| MD5 | 0ce6faf18b275cc3ee38e9d23a6a1647 |
| BLAKE2b-256 | 2e2c040f459fbc77624fba4dfe4412661f5fed3376917d541656ba7534457f5f |

See more details on using hashes here.
