
Engram-PEFT: Efficient Parameter-Efficient Fine-Tuning with Engram


Engram-PEFT


[!IMPORTANT] This is an unofficial implementation of the DeepSeek Engram paper (arXiv:2601.07372). The official DeepSeek-AI demo is available here.


Engram-PEFT is a high-performance, 100% paper-aligned implementation of the DeepSeek Engram architecture. It provides a Parameter-Efficient Fine-Tuning (PEFT) interface to inject conditional memory into any Transformer-based LLM.

Engram decouples static knowledge storage from dynamic reasoning using a sparse retrieval mechanism, allowing models to scale their factual memory without increasing inference FLOPs or interfering with core logic.
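The retrieval idea can be sketched in a few lines: context n-grams are hashed into a fixed number of rows of a large memory table, so each step reads (and updates) only those rows regardless of the table's total size. The function below is a hypothetical stdlib sketch, not the paper's actual multi-head hashing or gating scheme.

```python
import hashlib

def ngram_indices(token_ids, n=2, table_rows=1 << 20, num_heads=4):
    """Map each n-gram of token ids to `num_heads` rows of a memory table.

    Illustrative hash-based sparse lookup: only the returned rows would be
    gathered on a given step, independent of how large the table grows.
    """
    rows = []
    for i in range(len(token_ids) - n + 1):
        gram = ",".join(map(str, token_ids[i:i + n]))
        rows.append([
            int.from_bytes(
                hashlib.blake2b(f"{h}:{gram}".encode(), digest_size=8).digest(),
                "little",
            ) % table_rows
            for h in range(num_heads)
        ])
    return rows
```

Because the same n-gram always hashes to the same rows, the lookup is deterministic and can be precomputed on the CPU ahead of the forward pass.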


🚀 Quick Start

Installation

pip install engram-peft

To run examples or contribute to development, install the project with development dependencies:

# Using uv (recommended)
uv sync --all-groups

# Using pip
pip install -e ".[dev]"

5-Minute Example

from transformers import AutoModelForCausalLM, AutoTokenizer
from engram_peft import EngramConfig, get_engram_model

# 1. Load base model
base_model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T")

# 2. Inject Engram layers (aligned with arXiv:2601.07372)
config = EngramConfig(target_layers=[2, 11, 20])
model = get_engram_model(base_model, config, tokenizer)

# 3. Quick check on trainable parameters
model.print_trainable_parameters()
# trainable params: 86,938,368 || all params: 1,186,986,752 || trainable%: 7.3243
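The printed percentage is simply the ratio of trainable to total parameters; with the numbers above:

```python
trainable = 86_938_368   # parameters registered by the injected Engram layers
total = 1_186_986_752    # base model plus Engram tables
print(f"trainable%: {100 * trainable / total:.4f}")  # trainable%: 7.3243
```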

📊 Performance Comparison

| Method | Params Added | Speed (s/step) | Training Loss | Eval Loss | Peak Memory |
|---|---|---|---|---|---|
| LoRA (r=16) | ~2.25 M | 0.2738 | 1.231 | 0.9890 | 8.07 GB |
| Engram-PEFT | 545.4 M | 0.2961 | 1.263 | 1.0165 | 9.38 GB |
| LoRA+Engram | ~547.7 M | 0.3360 | 1.214 | 0.9656 | 10.33 GB |

[!TIP] Performance Insight: In our latest benchmark (Test 8 & 9, TinyLlama-1.1B, 3000 steps), LoRA+Engram achieved the best convergence (lowest eval loss), outperforming standalone LoRA by ~2.3%. Engram-PEFT provides 240x more parameter capacity (545M) for knowledge storage with minimal latency penalty. Use LoRA+Engram to leverage both structural adaptation and high-capacity sparse memory.

Loss Curve Comparison


* Engram employs sparse lookup; only a tiny fraction of parameters (approx. 1%) are active and receive gradient updates per step. For a detailed breakdown of performance, computation, and memory, see our Performance Analysis.
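As a rough sanity check on the ~1% figure, the active fraction per step is bounded by the number of retrieved rows times the row width, divided by the table size. All numbers below are hypothetical and chosen only for illustration, not measured values:

```python
def active_fraction(retrieved_rows, row_dim, table_params):
    """Upper-bound fraction of memory-table parameters touched in one step.

    `retrieved_rows` is the number of rows gathered this step (roughly
    seq_len x num_heads if every row is distinct).
    """
    return retrieved_rows * row_dim / table_params

# e.g. 2048 tokens x 4 heads = 8192 rows of width 512, from a 545.4 M table:
frac = active_fraction(8192, 512, 545_400_000)
print(f"{100 * frac:.2f}% of table parameters active")  # 0.77%
```

The key property is that the numerator is fixed by the sequence length, so the active fraction shrinks as the table grows.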


🛠 Features

  • 100% Paper Alignment: Implements Appendix A Table 5 parameters and the official DeepSeek gating/hashing logic.
  • CPU Prefetching & Precomputation: EngramDataCollator pre-calculates multi-head hash indices on the CPU. By using num_workers > 0, these indices are prefetched in parallel with training, ensuring zero hashing overhead on the GPU.
  • Tokenizer Compression: Built-in NFKC and lowercase normalization for 23% vocabulary reduction.
  • Cross-Model Weight Migration: A unique feature (see weight_transfer.py) that allows migrating Engram weights between different models (e.g., Llama to Qwen) using character-level alignment on a corpus—effectively "recycling" learned knowledge.
  • Zero-Invasive: Injects via forward hooks; no modification to your base model architecture required.
  • PEFT-like API: Familiar methods like print_trainable_parameters() and save_pretrained().
  • Combined Training (LoRA+Engram): Support for stacking adapters. Injects LoRA for structural fine-tuning and Engram for sparse knowledge retrieval in a single model.
  • Named Adapters: Industry-standard named adapter management (add/set/unload) for modular knowledge packs.
  • Automated Training: Native EngramTrainer with built-in sparse Adam support and automatic sync of optimizer hyperparameters.
  • Flexible Layer Discovery: Recursive logic to find transformer layers regardless of PEFT wrapper nesting.
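The tokenizer-compression feature above rests on a simple observation: after NFKC normalization and lowercasing, many visually distinct surface forms collapse to a single vocabulary key. A minimal stdlib sketch (the library's actual normalizer may do more):

```python
import unicodedata

def normalize_key(text: str) -> str:
    """Collapse compatibility variants (fullwidth, ligatures, case) to one key."""
    return unicodedata.normalize("NFKC", text).lower()

# Fullwidth, uppercase, and plain ASCII forms all share one entry:
assert normalize_key("Ｈｅｌｌｏ") == normalize_key("HELLO") == "hello"
# Ligatures decompose under NFKC as well:
assert normalize_key("ﬁle") == "file"
```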

📖 Documentation

For full details, see our documentation.


🎯 Citation

If you use this implementation in your research, please cite the original DeepSeek paper:

@article{deepseek2026engram,
  title={Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models},
  author={DeepSeek-AI},
  journal={arXiv preprint arXiv:2601.07372},
  year={2026}
}

License

Apache License 2.0. See LICENSE for details.
