High-recall conversational memory retrieval. 98% R@5 on LongMemEval, 90% on LoCoMo — no LLM required. Local-first, cloud-ready.

Engram

High-recall conversational memory retrieval. Local-first, cloud-ready.



Benchmark Results

Tested on two major benchmarks — no LLM required, zero cost per query.

LongMemEval (500 questions)

| Metric | Score |
| --- | --- |
| R@5 | 98.4% (492/500) |
| R@10 | 99.4% |
| NDCG@5 | 0.934 |

| Question Type | R@5 |
| --- | --- |
| knowledge-update | 98.7% |
| multi-session | 99.2% |
| single-session-assistant | 100.0% |
| single-session-user | 100.0% |
| temporal-reasoning | 97.0% |
| single-session-preference | 93.3% |

LoCoMo (1982 questions, 10 conversations)

| Metric | Score |
| --- | --- |
| R@5 | 89.6% (1776/1982) |
| R@10 | 92.6% |
| NDCG@5 | 0.829 |

| Category | R@5 | R@10 |
| --- | --- | --- |
| Single-hop (factual) | 85.8% | 91.8% |
| Temporal (dates) | 89.4% | 92.5% |
| Multi-hop (inference) | 63.0% | 70.7% |
| Contextual (details) | 93.3% | 95.6% |
| Adversarial (speaker) | 90.6% | 92.2% |

Scores reported with --mode rerank (chunking + cross-encoder reranking).
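The R@k and NDCG@k figures above are standard retrieval metrics. As a minimal sketch (not Engram's benchmark code), they can be computed per question like this, where R@k is the hit rate of relevant evidence in the top k and NDCG@k discounts relevant hits by rank:

```python
import math

def recall_at_k(retrieved, relevant, k):
    """Hit-rate R@k for one query: 1.0 if any relevant doc id
    appears in the first k retrieved results, else 0.0."""
    return 1.0 if any(doc_id in relevant for doc_id in retrieved[:k]) else 0.0

def ndcg_at_k(retrieved, relevant, k):
    """Normalized discounted cumulative gain: rewards placing relevant
    documents near the top, with a log2 positional discount."""
    dcg = sum(
        1.0 / math.log2(rank + 2)  # rank 0 gets discount log2(2) = 1
        for rank, doc_id in enumerate(retrieved[:k])
        if doc_id in relevant
    )
    ideal = sum(1.0 / math.log2(r + 2) for r in range(min(len(relevant), k)))
    return dcg / ideal if ideal else 0.0
```

Benchmark-level numbers are then just these values averaged over all questions.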

What It Does

Engram stores conversation history and retrieves it with state-of-the-art accuracy. It uses a three-stage retrieval pipeline — dense embeddings, sparse keyword matching, and cross-encoder reranking — to achieve higher recall than systems relying on LLM-based extraction or summarization.

Nothing is summarized. Nothing is paraphrased. Your exact words are stored and returned.
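The fusion step of the pipeline merges the dense and sparse rankings with Reciprocal Rank Fusion. A minimal sketch of RRF in plain Python (illustrative only, not Engram's internal implementation; k=60 is the conventional constant):

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: merge several ranked lists of doc ids.
    Each list contributes 1 / (k + rank) per document, so documents
    ranked well by multiple retrievers rise to the top."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Dense and sparse retrieval partially disagree; fusion favors the
# document both retrievers rank first.
dense = ["d1", "d3", "d7"]   # order from embedding similarity
sparse = ["d1", "d9", "d3"]  # order from BM25 keyword match
fused = rrf_fuse([dense, sparse])
```

The fused list then goes to the cross-encoder, which only has to rerank a short candidate set.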

How It Compares

LoCoMo — Zero-LLM Memory Systems

| System | LoCoMo Accuracy | LLM Required |
| --- | --- | --- |
| EverMemOS | 92.3% | Yes (cloud) |
| Engram | 89.6% | No |
| Hindsight | 89.6% | Yes (cloud) |
| Zep | ~85% | Yes (cloud) |
| Letta / MemGPT | ~83.2% | Yes (cloud) |
| SLM V3 (zero-cloud) | 74.8% | No |
| Supermemory | ~70% | Yes |
| Mem0 (independent) | ~58% | Yes |

Engram is the top-performing zero-LLM system on LoCoMo — matching paid cloud-LLM services like Hindsight at $0/query.

LongMemEval

| | Engram | MemPalace | Mem0 |
| --- | --- | --- | --- |
| R@5 (LongMemEval) | 98.4% | 96.6% | |
| Embedding model | bge-large (1024d) | all-MiniLM (384d) | Varies |
| Sparse retrieval | BM25 + RRF fusion | Ad-hoc keyword overlap | N/A |
| Reranking | Cross-encoder (free) | LLM call ($0.001/q) | N/A |
| Indexing | User + assistant + preference docs | User turns only | LLM-extracted facts |
| Cloud deployment | Qdrant backend | No | Yes |
| LLM required | No | No (optional rerank) | Yes |

Install

pip install engram-search

Optional extras:

# With cloud backend (Qdrant)
pip install engram-search[cloud]

# With cross-encoder reranker
pip install engram-search[rerank]

# Everything (dev + cloud + rerank)
pip install engram-search[all]

Quickstart — CLI

# Initialize a memory store
engram init ./my_memories

# Ingest conversations
engram ingest conversations.json --store ./my_memories

# Search
engram search "why did we switch to GraphQL" --store ./my_memories

Quickstart — Python API

from engram.backends.faiss_backend import FaissBackend
from engram.backends.base import Document
from engram.ingestion.parser import session_to_documents
from engram.retrieval.embedder import Embedder
from engram.retrieval.pipeline import RetrievalPipeline

# Initialize
embedder = Embedder("bge-large")
backend = FaissBackend(path="./my_memories", dimension=1024)
pipeline = RetrievalPipeline(embedder=embedder)

# Ingest a conversation
turns = [
    {"role": "user", "content": "I'm switching our API from REST to GraphQL."},
    {"role": "assistant", "content": "What's driving the switch?"},
    {"role": "user", "content": "Too many round trips. Our mobile app makes 12 calls per screen."},
]
docs = session_to_documents(turns, session_id="session_1", timestamp="2025-01-15")
texts = [d["text"] for d in docs]
embeddings = embedder.encode_documents(texts)

documents = [
    Document(id=d["id"], text=d["text"], embedding=e.tolist(), metadata=d["metadata"])
    for d, e in zip(docs, embeddings)
]
backend.add(documents)

# Search
results = pipeline.search("why did we switch to GraphQL", documents=documents, top_k=3)
for r in results:
    print(r.text)

Quickstart — Cloud Mode

# Set up Qdrant (managed or self-hosted)
export ENGRAM_BACKEND=qdrant
export ENGRAM_QDRANT_URL=https://your-cluster.qdrant.io:6333
export ENGRAM_QDRANT_API_KEY=your-api-key

# Start the API server
pip install fastapi uvicorn
uvicorn engram.server:app --host 0.0.0.0 --port 8000

API Endpoints

| Method | Endpoint | Description |
| --- | --- | --- |
| POST | /ingest | Add conversations |
| POST | /search | Search memories |
| GET | /health | Health check |
| GET | /stats | Store statistics |
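A minimal stdlib client for the search endpoint might look like the sketch below. The JSON field names ("query", "top_k") are assumptions for illustration, not a documented schema; check engram.server for the actual request model.

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # the uvicorn server started above

def build_search_request(query, top_k=5):
    """Build a POST /search request. Field names "query" and "top_k"
    are illustrative assumptions, not Engram's documented schema."""
    body = json.dumps({"query": query, "top_k": top_k}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/search",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def search(query, top_k=5):
    """Send the request and decode the JSON response."""
    with urllib.request.urlopen(build_search_request(query, top_k)) as resp:
        return json.loads(resp.read())
```

The same pattern applies to /ingest; /health and /stats are plain GETs.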

Examples

Check out the interactive notebooks in examples/:

| Notebook | Description |
| --- | --- |
| Getting Started | Ingest conversations, search memories, understand hybrid retrieval |
| Customer Support | Build a support agent with full customer history recall |
| Personal Assistant | AI assistant with long-term memory across conversations |

Docker

# Local mode
docker compose up

# Or build and run directly
docker build -t engram .
docker run -p 8000:8000 -v engram_data:/data engram

Architecture

┌─────────────────────────────────────────────────────────────┐
│                        Engram                               │
│                                                             │
│  ┌────────────┐  ┌─────────────┐  ┌───────────────────┐    │
│  │ Ingestion  │  │   Index     │  │    Retrieval      │    │
│  │            │→ │             │→ │                   │    │
│  │ user+asst  │  │ FAISS (local│  │ 1. Dense (bi-enc) │    │
│  │ turns      │  │  or Qdrant  │  │ 2. BM25 (sparse)  │    │
│  │ preference │  │ (cloud)     │  │ 3. RRF fusion     │    │
│  │ extraction │  │             │  │ 4. Cross-encoder  │    │
│  └────────────┘  └─────────────┘  └───────────────────┘    │
│                                                             │
│  Local: FAISS + SQLite    Cloud: Qdrant + REST API          │
└─────────────────────────────────────────────────────────────┘
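To make the sparse stage of the diagram concrete, here is a toy BM25 scorer over whitespace-tokenized documents. This is a teaching sketch only; the real pipeline uses a proper BM25 index, and the parameter defaults (k1=1.5, b=0.75) are just the common textbook values:

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each doc against the query terms with a toy BM25:
    term-frequency saturation (k1) and length normalization (b)
    weighted by a smoothed inverse document frequency."""
    tokenized = [doc.lower().split() for doc in docs]
    N = len(tokenized)
    avgdl = sum(len(toks) for toks in tokenized) / N
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))  # document frequency per term
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        score = 0.0
        for term in query_terms:
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            score += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(toks) / avgdl)
            )
        scores.append(score)
    return scores

docs = [
    "we switched the api from rest to graphql",
    "lunch options near the office",
    "graphql reduces round trips for the mobile app",
]
scores = bm25_scores(["graphql", "rest"], docs)
```

In the full pipeline these sparse scores are never used alone: their ranking is fused with the dense ranking via RRF, and the merged candidates go to the cross-encoder.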

Run Benchmarks

LongMemEval

# Download dataset
curl -fsSL -o data/longmemeval_s_cleaned.json \
  https://huggingface.co/datasets/xiaowu0162/longmemeval-cleaned/resolve/main/longmemeval_s_cleaned.json

pip install engram-search[all]

python benchmarks/longmemeval_bench.py data/longmemeval_s_cleaned.json --mode hybrid

LoCoMo

# Download dataset (from Snap Research)
curl -fsSL -o data/locomo10.json \
  https://raw.githubusercontent.com/snap-research/locomo/main/data/locomo10.json

python benchmarks/locomo_bench.py data/locomo10.json --mode rerank

Requirements

  • Python 3.9+
  • ~1.3 GB disk for bge-large embedding model (downloaded on first use)
  • No API keys required for local mode

License

MIT
