Engram-PEFT: Efficient Parameter-Efficient Fine-Tuning with Engram

Project description

Engram-PEFT

English | 中文

[!IMPORTANT] This is an unofficial implementation of the DeepSeek Engram paper (arXiv:2601.07372). The official DeepSeek-AI demo is available here.


Engram-PEFT is a high-performance, 100% paper-aligned implementation of the DeepSeek Engram architecture. It provides a Parameter-Efficient Fine-Tuning (PEFT) interface to inject conditional memory into any Transformer-based LLM.

Engram decouples static knowledge storage from dynamic reasoning using a sparse retrieval mechanism, allowing models to scale their factual memory without increasing inference FLOPs or interfering with core logic.
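To make the mechanism concrete, below is a deliberately simplified sketch in plain PyTorch of the general idea: hash an n-gram of recent token ids into a large memory table and gate the retrieved vector into the hidden state. The class name, table size, and hashing scheme are illustrative only; this is neither the package's API nor the paper's exact formulation.

import torch
import torch.nn as nn

class ToyConditionalMemory(nn.Module):
    # Toy illustration only: not the Engram-PEFT API and not the paper's exact design.
    def __init__(self, hidden_size, table_size=65536, ngram=3):
        super().__init__()
        self.memory = nn.Embedding(table_size, hidden_size)  # static knowledge table
        self.gate = nn.Linear(hidden_size, 1)                 # learned scalar gate
        self.table_size = table_size
        self.ngram = ngram

    def forward(self, hidden, input_ids):
        # hidden: (batch, seq, d); input_ids: (batch, seq)
        # Hash each position's trailing n-gram of token ids into a table index
        # (wrap-around at the start of the sequence is ignored for simplicity).
        idx = torch.zeros_like(input_ids)
        for k in range(self.ngram):
            idx = (idx * 1000003 + torch.roll(input_ids, shifts=k, dims=1)) % self.table_size
        retrieved = self.memory(idx)                  # one O(1) lookup per token
        g = torch.sigmoid(self.gate(hidden))          # how much memory to admit
        return hidden + g * retrieved                 # conditional memory added to the stream

layer = ToyConditionalMemory(hidden_size=64)
out = layer(torch.randn(2, 8, 64), torch.randint(0, 32000, (2, 8)))  # same shape as hidden

Because each token performs a single table lookup, the memory table can be scaled up without adding per-token compute, which is what lets factual capacity grow without increasing inference FLOPs.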


🚀 Quick Start

Installation

pip install engram-peft

To run examples or contribute to development, install the project with development dependencies:

# Using uv (recommended)
uv sync --all-groups

# Using pip
pip install -e ".[dev]"

5-Minute Example

from transformers import AutoModelForCausalLM, AutoTokenizer
from engram_peft import EngramConfig, get_engram_model

# 1. Load base model
base_model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T")

# 2. Inject Engram layers (aligned with arXiv:2601.07372)
config = EngramConfig(target_layers=[2, 11, 20])
model = get_engram_model(base_model, config, tokenizer)

# 3. Model is ready for training! 
# Only Engram parameters (approx 1% of total) are trainable.
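Assuming get_engram_model returns a standard PyTorch nn.Module (the check below is generic PyTorch, not a documented package API), you can confirm before training that only a small fraction of parameters is trainable:

# Count trainable vs. total parameters of the wrapped model.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable: {trainable:,} / {total:,} ({100 * trainable / total:.2f}%)")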

📊 Performance Comparison

Method               | Params Added | Grad. Update Size | VRAM (1.1B)
FFT (Full Fine-Tune) | 0            | 1,100M            | ~24GB (est.)
LoRA (r=16)          | 1.8M         | 1.8M              | ~5.1GB
Engram-PEFT          | 11.2M        | ~1.2M*            | ~6.8GB

* Engram employs sparse lookup; only a tiny fraction of parameters (approx. 1%) are active and receive gradient updates per step. For a detailed breakdown of VRAM usage and scaling, see our Memory Analysis.
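To see why sparse lookup keeps per-step gradient updates small, here is a generic PyTorch illustration (not this package's internals): with an embedding-style table, only the rows actually indexed in a batch receive non-zero gradients.

import torch
import torch.nn as nn

table = nn.Embedding(10_000, 64)            # stand-in for a large memory table
idx = torch.tensor([[3, 17, 3, 999]])       # only a handful of rows are looked up
table(idx).sum().backward()

# Count rows whose gradient is non-zero: only the indexed rows (3, 17, 999).
touched = (table.weight.grad.abs().sum(dim=1) > 0).sum().item()
print(touched)  # 3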


🛠 Features

  • 100% Paper Alignment: Implements the parameters from Appendix A, Table 5 and the official DeepSeek gating/hashing logic.
  • CPU-Side Precomputation: EngramDataCollator precomputes multi-head hashes on the CPU, so hashing does not stall the GPU during training.
  • Tokenizer Compression: Built-in NFKC and lowercase normalization yields roughly a 23% vocabulary reduction, consistent with the paper (see the sketch after this list).
  • Zero-Invasive: Injects via forward hooks; no modification to your base model architecture is required.
  • Dynamic Switching: Load and swap "knowledge packs" at runtime without reloading the base model.
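As a rough, standard-library-only illustration of the normalization mentioned in the Tokenizer Compression bullet above (the package performs this internally; the function below is just a sketch of NFKC plus lowercasing):

import unicodedata

def normalize_for_vocab(text: str) -> str:
    # NFKC folds compatibility characters (full-width forms, ligatures, etc.);
    # lowercasing then collapses case variants into a single vocabulary entry.
    return unicodedata.normalize("NFKC", text).lower()

print(normalize_for_vocab("Ｄｅｅｐ ﬁne-Tuning"))  # -> "deep fine-tuning"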

📖 Documentation

For full details, see our documentation.


🎯 Citation

If you use this implementation in your research, please cite the original DeepSeek paper:

@article{deepseek2026engram,
  title={Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models},
  author={DeepSeek-AI},
  journal={arXiv preprint arXiv:2601.07372},
  year={2026}
}

License

Apache License 2.0. See LICENSE for details.

Download files

Download the file for your platform.

Source Distribution

engram_peft-1.0.3.tar.gz (34.5 kB)

Uploaded Source

Built Distribution

engram_peft-1.0.3-py3-none-any.whl (26.6 kB)

Uploaded Python 3

File details

Details for the file engram_peft-1.0.3.tar.gz.

File metadata

  • Download URL: engram_peft-1.0.3.tar.gz
  • Size: 34.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for engram_peft-1.0.3.tar.gz
Algorithm   | Hash digest
SHA256      | be856e009bc2b77bdef137e68e0578bfaf0faab9177c97386b98e9030adc8a6f
MD5         | 4ab994b76800a9f5f12836619de00454
BLAKE2b-256 | f5565dd3863ea2e6948c3dc44b003164820545b54d567140161facad16e8ec16

File details

Details for the file engram_peft-1.0.3-py3-none-any.whl.

File metadata

  • Download URL: engram_peft-1.0.3-py3-none-any.whl
  • Size: 26.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for engram_peft-1.0.3-py3-none-any.whl
Algorithm   | Hash digest
SHA256      | 29042a1b944d7eb943ca106ded65f72efb313fadee77e7a188c0b3dba89b9d71
MD5         | 1749defba0bd8882efeed6cf22dd35b7
BLAKE2b-256 | 07bcd9559c8ec32890db954169f00378e025b0700760b7f54bae639b638a338a
