High-performance AI memory engine with Rust core

mem7

LLM-powered long-term memory engine — Rust core with multi-language bindings.

mem7 extracts factual statements from conversations, deduplicates them against existing memories, and stores the results in a vector database with full audit history.
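
The extract → dedup → store flow can be illustrated with a generic sketch. This is not mem7's implementation (mem7 performs extraction and deduplication with an LLM, and stores vectors rather than raw strings); the function names and the similarity threshold below are illustrative stand-ins only.

```python
# Generic illustration of an extract -> dedup -> store pipeline.
# NOT mem7's actual logic: mem7 uses an LLM for both steps.
from difflib import SequenceMatcher

def extract_facts(conversation: str) -> list[str]:
    # Stand-in for LLM fact extraction: one "fact" per sentence.
    return [s.strip() for s in conversation.split(".") if s.strip()]

def is_duplicate(fact: str, store: list[str], threshold: float = 0.9) -> bool:
    # Stand-in for LLM-driven dedup: fuzzy string similarity.
    return any(
        SequenceMatcher(None, fact.lower(), m.lower()).ratio() >= threshold
        for m in store
    )

def add(conversation: str, store: list[str]) -> list[str]:
    # Extract candidate facts, keep only the novel ones.
    added = []
    for fact in extract_facts(conversation):
        if not is_duplicate(fact, store):
            store.append(fact)
            added.append(fact)
    return added

store: list[str] = []
add("I love playing tennis. My coach is Sarah.", store)
add("I love playing tennis.", store)  # near-duplicate, skipped
```

mem7 additionally records every add/update in a SQLite audit trail (mem7-history), which this sketch omits.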

Install

```shell
pip install mem7          # Python
npm install @mem7ai/mem7  # Node.js / TypeScript
cargo add mem7            # Rust
```

Architecture

```text
Python / TypeScript / Rust API
    │  PyO3 (sync + async) / napi-rs / native
    ▼
Rust Core (tokio async runtime)
    ├── mem7-llm        — OpenAI-compatible LLM client
    ├── mem7-embedding  — OpenAI-compatible embedding client
    ├── mem7-vector     — Vector index (FlatIndex / Upstash)
    ├── mem7-graph      — Graph store (FlatGraph / Kuzu / Neo4j)
    ├── mem7-history    — SQLite audit trail
    ├── mem7-dedup      — LLM-driven memory deduplication
    ├── mem7-reranker   — Search reranking (Cohere / LLM-based)
    └── mem7-store      — Pipeline orchestrator (MemoryEngine)
```

Quick Start (Python — Sync)

```python
from mem7 import Memory
from mem7.config import MemoryConfig, LlmConfig, EmbeddingConfig

config = MemoryConfig(
    llm=LlmConfig(
        base_url="http://localhost:11434/v1",
        api_key="ollama",
        model="qwen2.5:7b",
    ),
    embedding=EmbeddingConfig(
        base_url="http://localhost:11434/v1",
        api_key="ollama",
        model="mxbai-embed-large",
        dims=1024,
    ),
)

m = Memory(config=config)
m.add("I love playing tennis and my coach is Sarah.", user_id="alice")
results = m.search("What sports does Alice play?", user_id="alice")
```

Quick Start (Python — Async)

```python
import asyncio
from mem7 import AsyncMemory
from mem7.config import MemoryConfig, LlmConfig, EmbeddingConfig

async def main():
    config = MemoryConfig(
        llm=LlmConfig(
            base_url="http://localhost:11434/v1",
            api_key="ollama",
            model="qwen2.5:7b",
        ),
        embedding=EmbeddingConfig(
            base_url="http://localhost:11434/v1",
            api_key="ollama",
            model="mxbai-embed-large",
            dims=1024,
        ),
    )

    m = await AsyncMemory.create(config=config)
    await m.add("I love playing tennis and my coach is Sarah.", user_id="alice")
    results = await m.search("What sports does Alice play?", user_id="alice")

asyncio.run(main())
```

Quick Start (TypeScript)

```typescript
import { MemoryEngine } from "@mem7ai/mem7";

const engine = await MemoryEngine.create(JSON.stringify({
  llm: { base_url: "http://localhost:11434/v1", api_key: "ollama", model: "qwen2.5:7b" },
  embedding: { base_url: "http://localhost:11434/v1", api_key: "ollama", model: "mxbai-embed-large", dims: 1024 },
}));

await engine.add([{ role: "user", content: "I love playing tennis and my coach is Sarah." }], "alice");
const results = await engine.search("What sports does Alice play?", "alice");
```

Supported Providers

mem7 uses a single OpenAI-compatible client for both LLMs and embeddings, which covers any service that exposes the OpenAI API format. This includes most major providers out of the box.
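
Because only the endpoint changes, switching providers is a matter of pointing LlmConfig at a different base URL. The URLs below are the providers' published OpenAI-compatible endpoints; the model names are illustrative placeholders, not recommendations.

```python
from mem7.config import LlmConfig

# Groq, via its OpenAI-compatible endpoint (model name illustrative)
groq = LlmConfig(
    base_url="https://api.groq.com/openai/v1",
    api_key="YOUR_GROQ_KEY",
    model="llama-3.1-8b-instant",
)

# vLLM served locally (default OpenAI-compatible server port)
vllm = LlmConfig(
    base_url="http://localhost:8000/v1",
    api_key="unused",  # local servers often ignore the key
    model="Qwen/Qwen2.5-7B-Instruct",
)
```

The same pattern applies to EmbeddingConfig for embedding providers.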

LLMs

| Provider | mem0 | mem7 | Notes |
|---|---|---|---|
| OpenAI | ✅ | ✅ | Native support |
| Ollama | ✅ | ✅ | Via OpenAI-compatible API |
| vLLM | ✅ | ✅ | Via OpenAI-compatible API |
| Groq | ✅ | ✅ | Via OpenAI-compatible API |
| Together | ✅ | ✅ | Via OpenAI-compatible API |
| DeepSeek | ✅ | ✅ | Via OpenAI-compatible API |
| xAI (Grok) | ✅ | ✅ | Via OpenAI-compatible API |
| LM Studio | ✅ | ✅ | Via OpenAI-compatible API |
| Azure OpenAI | ✅ | ✅ | Via OpenAI-compatible API |
| Anthropic | ✅ | ❌ | Requires native SDK |
| Gemini | ✅ | ❌ | Requires native SDK |
| Vertex AI | ✅ | ❌ | Requires native SDK |
| AWS Bedrock | ✅ | ❌ | Requires native SDK |
| LiteLLM | ✅ | ❌ | Python proxy |
| Sarvam | ✅ | ❌ | Requires native SDK |
| LangChain | ✅ | ❌ | Python framework |

Embeddings

| Provider | mem0 | mem7 | Notes |
|---|---|---|---|
| OpenAI | ✅ | ✅ | Native support |
| Ollama | ✅ | ✅ | Via OpenAI-compatible API |
| Together | ✅ | ✅ | Via OpenAI-compatible API |
| LM Studio | ✅ | ✅ | Via OpenAI-compatible API |
| Azure OpenAI | ✅ | ✅ | Via OpenAI-compatible API |
| Hugging Face | ✅ | ❌ | Requires native SDK |
| Gemini | ✅ | ❌ | Requires native SDK |
| Vertex AI | ✅ | ❌ | Requires native SDK |
| AWS Bedrock | ✅ | ❌ | Requires native SDK |
| FastEmbed | ✅ | ❌ | Python-only (ONNX) |
| LangChain | ✅ | ❌ | Python framework |

Vector Stores

| Provider | mem0 | mem7 | Notes |
|---|---|---|---|
| In-memory (FlatIndex) | — | ✅ | Built-in, good for dev |
| Upstash Vector | ✅ | ✅ | REST API, serverless |
| Qdrant | ✅ | ❌ | |
| Chroma | ✅ | ❌ | |
| pgvector | ✅ | ❌ | |
| Milvus | ✅ | ❌ | |
| Pinecone | ✅ | ❌ | |
| Redis | ✅ | ❌ | |
| Weaviate | ✅ | ❌ | |
| Elasticsearch | ✅ | ❌ | |
| OpenSearch | ✅ | ❌ | |
| FAISS | ✅ | ❌ | |
| MongoDB | ✅ | ❌ | |
| Supabase | ✅ | ❌ | |
| Azure AI Search | ✅ | ❌ | |
| Vertex AI Vector Search | ✅ | ❌ | |
| Databricks | ✅ | ❌ | |
| Cassandra | ✅ | ❌ | |
| S3 Vectors | ✅ | ❌ | |
| Baidu | ✅ | ❌ | |
| Neptune | ✅ | ❌ | |
| Valkey | ✅ | ❌ | |
| LangChain | ✅ | ❌ | |

Rerankers

| Provider | mem0 | mem7 | Notes |
|---|---|---|---|
| Cohere | ✅ | ✅ | Cohere v2 rerank API |
| LLM-based | ✅ | ✅ | Any OpenAI-compatible LLM |
| Jina AI | ✅ | ❌ | Planned |
| Cross-encoder | ✅ | ❌ | Planned |

Graph Stores

| Provider | mem0 | mem7 | Notes |
|---|---|---|---|
| In-memory (FlatGraph) | — | ✅ | Built-in, good for dev/testing |
| Kuzu (embedded) | ✅ | ✅ | Cypher-based, no server needed (feature flag kuzu) |
| Neo4j | ✅ | ✅ | Production-grade, Bolt protocol |
| Memgraph | ✅ | ❌ | Planned |
| Amazon Neptune | ✅ | ❌ | Planned |

Language Bindings

| Language | Status | Install |
|---|---|---|
| Python (sync + async) | ✅ | PyPI: pip install mem7 |
| TypeScript / Node.js | ✅ | npm: npm install @mem7ai/mem7 |
| Rust | ✅ | crates.io: cargo add mem7 |
| Go | Planned | |

Vector Store Backends

Built-in FlatIndex (default) — in-memory brute-force, good for development:

```python
from mem7.config import VectorConfig

VectorConfig(provider="flat", dims=1024)
```

Upstash Vector — managed cloud vector database:

```python
VectorConfig(
    provider="upstash",
    collection_name="my-namespace",
    dims=1024,
    upstash_url="https://your-index.upstash.io",
    upstash_token="your-token",
)
```

Graph Memory (Dual-Path Recall)

When a graph store is configured, mem7 runs dual-path recall: vector search and graph search execute concurrently via tokio::join!, returning both factual memories and entity relations.
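
The dual-path pattern (tokio::join! in the Rust core) can be sketched in Python terms with asyncio.gather; search_vectors and search_graph below are hypothetical stand-ins for the real lookups, and the result shape mirrors the memories/relations keys shown in the FlatGraph example later in this section.

```python
import asyncio

async def search_vectors(query: str) -> list[str]:
    # Stand-in for the embedding + vector-index lookup.
    await asyncio.sleep(0.01)
    return ["Alice loves playing tennis"]

async def search_graph(query: str) -> list[str]:
    # Stand-in for the graph traversal.
    await asyncio.sleep(0.01)
    return ["USER -[loves_playing]-> tennis"]

async def dual_path_recall(query: str) -> dict:
    # Both searches run concurrently; total latency is roughly
    # max(vector, graph) rather than their sum.
    memories, relations = await asyncio.gather(
        search_vectors(query), search_graph(query)
    )
    return {"memories": memories, "relations": relations}

result = asyncio.run(dual_path_recall("What sports does Alice play?"))
```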

On add(), the engine extracts entities and relations from the conversation using the LLM in JSON mode, and stores them in the graph alongside the vector memories.
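
JSON mode means the LLM is constrained to emit parseable JSON. The schema below is a hypothetical illustration of what such an extraction response could look like, not mem7's documented internal format:

```python
import json

# Hypothetical shape of an LLM JSON-mode extraction response for the
# sentence "I love playing tennis and my coach is Sarah."
llm_response = """
{
  "entities": [
    {"name": "alice", "type": "person"},
    {"name": "tennis", "type": "sport"},
    {"name": "sarah", "type": "person"}
  ],
  "relations": [
    {"source": "alice", "relation": "loves_playing", "target": "tennis"},
    {"source": "alice", "relation": "has_coach", "target": "sarah"}
  ]
}
"""

payload = json.loads(llm_response)
# Each relation becomes an edge in the graph store.
edges = [(r["source"], r["relation"], r["target"]) for r in payload["relations"]]
```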

FlatGraph (in-memory, for development):

```python
from mem7 import Memory
from mem7.config import MemoryConfig, LlmConfig, EmbeddingConfig, GraphConfig

config = MemoryConfig(
    llm=LlmConfig(base_url="http://localhost:11434/v1", api_key="ollama", model="qwen2.5:7b"),
    embedding=EmbeddingConfig(base_url="http://localhost:11434/v1", api_key="ollama", model="mxbai-embed-large", dims=1024),
    graph=GraphConfig(provider="flat"),
)

m = Memory(config=config)
m.add("I love playing tennis and my coach is Sarah.", user_id="alice")

results = m.search("What sports does Alice play?", user_id="alice")
# results["memories"]   -> vector search results
# results["relations"]  -> graph relations (e.g. USER -[loves_playing]-> tennis)
```

Neo4j (production):

```python
GraphConfig(
    provider="neo4j",
    neo4j_url="bolt://localhost:7687",
    neo4j_username="neo4j",
    neo4j_password="password",
)
```

Kuzu (embedded, requires kuzu feature flag):

```python
GraphConfig(provider="kuzu", kuzu_db_path="./my_graph.kuzu")
```

The graph LLM can be configured separately (e.g. use a cheaper model for extraction):

```python
GraphConfig(
    provider="flat",
    llm=LlmConfig(base_url="http://localhost:11434/v1", api_key="ollama", model="qwen2.5:3b"),
)
```

Examples

See the examples/ directory in the repository.

Development

Prerequisites

  • Rust 1.85+ (stable)
  • Python 3.10+
  • Node.js 22+
  • maturin

Build

```shell
python -m venv .venv && source .venv/bin/activate
pip install maturin pydantic

# Development build (debug, fast iteration)
maturin develop

# Release build
maturin develop --release
```

Test

```shell
# Rust tests
cargo test --workspace

# Clippy
cargo clippy --workspace --all-targets -- -D warnings
```

License

Apache-2.0

Download files

Download the file for your platform.

Source Distribution

  • mem7-0.2.0.tar.gz (65.8 kB): Source

Built Distributions

  • mem7-0.2.0-cp310-abi3-win_amd64.whl (4.1 MB): CPython 3.10+, Windows x86-64
  • mem7-0.2.0-cp310-abi3-manylinux_2_24_aarch64.whl (4.6 MB): CPython 3.10+, manylinux (glibc 2.24+) ARM64
  • mem7-0.2.0-cp310-abi3-macosx_11_0_arm64.whl (4.3 MB): CPython 3.10+, macOS 11.0+ ARM64
  • mem7-0.2.0-cp310-abi3-macosx_10_12_x86_64.whl (4.4 MB): CPython 3.10+, macOS 10.12+ x86-64
  • mem7-0.2.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.8 MB): CPython 3.8, manylinux (glibc 2.17+) x86-64

File details

Details for the file mem7-0.2.0.tar.gz.

File metadata

  • Size: 65.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 0380aad9d2adf9cc372d55512040c22d4fe36dde2e3cac17c428b9d4a3ac44ce |
| MD5 | 934974fede11474e3b1ad455edec6151 |
| BLAKE2b-256 | 0c23fc9d836f2e3d41639d7a2ae5f5288e597082ff8e459b62f9f0a6f5e42582 |

Provenance

The following attestation bundles were made for mem7-0.2.0.tar.gz:

Publisher: release.yml on mem7ai/mem7

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

