mem-mesh
Persistent memory for AI agents — hybrid vector + FTS5 search, pin-based session tracking, and NLI conflict detection. Zero external dependencies.
Korean · Quick Start · MCP Setup · MCP Tools · Session & Pins · Architecture · Docker · Contributing
Why mem-mesh?
Most MCP memory servers are glorified key-value stores. mem-mesh is built for how AI agents actually work — sessions with multiple steps, decisions that need to survive reboots, and cross-machine context that has to stay coherent.
| Differentiator | What it means |
|---|---|
| Pin lifecycle | Lightweight kanban inside every session: pin_add → pin_complete → pin_promote. No other MCP memory server has this. |
| Hybrid search | sqlite-vec vector embeddings + FTS5 full-text fused with Reciprocal Rank Fusion (RRF). Korean n-gram optimized out of the box. |
| NLI conflict detection | 2-stage pipeline: vector similarity pre-filter → mDeBERTa NLI model catches contradictory memories before they're stored. |
| 4-Tier Smart Expand | session_resume(expand="smart") uses an importance × status matrix to load only what matters — ~60% token savings. |
| Zero external services | Single SQLite file. pip install mem-mesh and you're running. No Postgres, no Redis, no cloud. |
| Dual MCP transport | stdio (Cursor, Claude Desktop, Kiro) + Streamable HTTP/SSE (MCP spec 2025-03-26). |
| 25+ client auto-detection | Identifies the calling IDE/AI platform from MCP handshake or User-Agent. |
| Batch operations | Pack multiple memory ops into one round-trip: 30–50% token savings. |
Features
- Memory CRUD — add, search, context, update, delete
- Hybrid search — sentence-transformers vectors + FTS5 RRF fusion, Korean n-gram support
- Session & pins — short-lived work tracking with importance-based promotion to permanent memory
- Memory relations — link, unlink, get_links across 7 relation types
- Conflict detection — mDeBERTa NLI prevents storing contradictory facts
- Batch operations — 30–50% fewer tokens per multi-op workflow
- Web dashboard — FastAPI REST API + real-time UI at localhost:8000
Quick Start
Recommended: uvx (zero Python management)
One tool to install — uv — and mem-mesh handles the rest. No virtualenv, no pyenv tweaks, no sqlite-vec compile errors. Your MCP client spawns a cached, isolated mem-mesh on-demand.
# 1. Install uv (one-time, ~15 seconds)
curl -LsSf https://astral.sh/uv/install.sh | sh
# 2. Run the interactive installer — writes MCP config for detected tools,
# offers to install hooks, warms the uv cache.
uvx --from "mem-mesh[server]" mem-mesh install
That's it. Restart Cursor / Claude Desktop / Kiro and mem-mesh MCP tools are live.
Want the web dashboard too? uvx --from "mem-mesh[server]" mem-mesh serve — open http://localhost:8000.
Alternative: pip install
If you prefer managing Python environments yourself:
pip install "mem-mesh[server]"
mem-mesh install # same interactive installer
mem-mesh serve # web server + SSE MCP at localhost:8000
Prerequisites (only if NOT using uvx)
mem-mesh loads the sqlite-vec extension at runtime, so Python's sqlite3 module must support loadable extensions.
- uvx users — uv's managed Python builds already have extension loading enabled. Nothing to do.
- Linux — the pysqlite3-binary wheel installs automatically as a fallback.
- macOS — system Python and Homebrew Python both work. Only pyenv's default build is broken.
- Windows — system Python works; install pysqlite3-binary manually if needed.
macOS + pyenv users who hit Migration failed: no such module: vec0:
# Option A: rebuild Python against Homebrew sqlite3
brew install sqlite3
SQLITE_PREFIX="$(brew --prefix sqlite3)"
PYTHON_CONFIGURE_OPTS="--enable-loadable-sqlite-extensions" \
LDFLAGS="-L${SQLITE_PREFIX}/lib" \
CPPFLAGS="-I${SQLITE_PREFIX}/include" \
CFLAGS="-I${SQLITE_PREFIX}/include" \
pyenv install 3.13 --force
pyenv rehash
# Option B (simplest): just use uvx — it bypasses system Python entirely
Linux distro Python, Docker images, and conda Python ship with extension loading enabled — no extra steps needed.
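If you're unsure whether your Python build supports loadable extensions, a quick stdlib check works anywhere (this is a generic CPython check, not a mem-mesh command):

```python
import sqlite3

# CPython compiled with --enable-loadable-sqlite-extensions exposes
# Connection.enable_load_extension; builds without it omit the method.
def supports_loadable_extensions() -> bool:
    conn = sqlite3.connect(":memory:")
    try:
        return hasattr(conn, "enable_load_extension")
    finally:
        conn.close()

print(supports_loadable_extensions())
```

If this prints False, use uvx or one of the fixes above before installing mem-mesh.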
MCP Setup
mem-mesh install writes these entries for you automatically. The snippets below are what gets written, for reference.
uvx (recommended)
Zero Python-env management. The MCP client spawns a cached mem-mesh process per call; the first run downloads it, subsequent runs are instant.
{
"mcpServers": {
"mem-mesh": {
"command": "uvx",
"args": ["--from", "mem-mesh[server]", "mem-mesh-mcp-stdio"],
"env": { "MEM_MESH_CLIENT": "cursor" }
}
}
}
Stdio (local Python)
Use your own Python install. Good if you need -e . dev installs.
{
"mcpServers": {
"mem-mesh": {
"command": "python",
"args": ["-m", "app.mcp_stdio"],
"cwd": "/absolute/path/to/mem-mesh",
"env": { "MCP_LOG_LEVEL": "INFO" }
}
}
}
SSE (shared running server)
For web clients or when multiple tools share one process. Requires mem-mesh serve running.
{
"mcpServers": {
"mem-mesh": {
"url": "http://localhost:8000/mcp/sse",
"type": "http"
}
}
}
Config file locations by tool:
| Tool | Config file |
|---|---|
| Cursor | .cursor/mcp.json |
| Claude Desktop | ~/Library/Application Support/Claude/claude_desktop_config.json |
| Kiro | ~/.kiro/settings/mcp.json |
Mode comparison
| | uvx | Stdio | SSE |
|---|---|---|---|
| Prereq | uv only | Python env with mem-mesh[server] | Running mem-mesh serve |
| First call | ~15s (cache warm) | Instant | Instant |
| Server to manage | None | None | Yes |
| Dashboard | Optional (uvx … serve) | Optional | Included |
| Hooks support | Requires separate server | Yes (local mode) | Yes (api mode) |
MCP Tools (15)
| Tool | Description | Key parameters |
|---|---|---|
| add | Store a memory | content, project_id, category, tags |
| search | Hybrid vector + FTS5 search | query, project_id, category, limit, recency_weight, response_format |
| context | Retrieve memories surrounding a given memory | memory_id, depth, project_id |
| update | Edit a memory | memory_id, content, category, tags |
| delete | Remove a memory | memory_id |
| stats | Usage statistics | project_id, start_date, end_date |
| link | Create a typed relation between memories | source_id, target_id, relation_type |
| unlink | Remove a relation | source_id, target_id |
| get_links | Query relations | memory_id, relation_type, direction |
| pin_add | Add a short-lived work-tracking pin | content, project_id, importance, tags |
| pin_complete | Mark a pin done; optionally promote to permanent memory | pin_id, promote, category |
| pin_promote | Promote an already-completed pin to permanent memory | pin_id, category |
| session_resume | Restore context from the previous session | project_id, expand, limit |
| session_end | Close a session with a summary | project_id, summary, auto_complete_pins |
| batch_operations | Execute multiple ops in one call | operations (array of add/search/pin_add/pin_complete) |
search response formats: minimal | compact | standard | full
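A batch_operations call packs several of the tools above into one request. The payload below is a hypothetical sketch: the field names (`op`, `content`, `query`, and so on) are assumptions based on the parameter table, not the verified schema.

```python
# Hypothetical batch_operations payload — field names are illustrative,
# inferred from the tool table above rather than the actual schema.
batch_payload = {
    "operations": [
        {"op": "add", "content": "Chose SQLite over Postgres", "category": "decision"},
        {"op": "pin_add", "content": "Migrate FTS index", "importance": 4},
        {"op": "search", "query": "embedding model", "limit": 5},
    ]
}

# One round-trip instead of three separate tool calls.
print(len(batch_payload["operations"]))  # → 3
```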
Search
mem-mesh runs two retrieval engines in parallel and merges results with Reciprocal Rank Fusion:
- Vector — sentence-transformers/all-MiniLM-L6-v2 (384-dim) by default; E5 models supported
- FTS5 — SQLite full-text search with n-gram tokenization for CJK languages
- RRF fusion — balances semantic similarity and keyword precision
- Quality filters — noise removal, intent analysis, vector pre-filter overfetch to improve recall
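Reciprocal Rank Fusion itself is simple: each document scores the sum of 1/(k + rank) over every ranked list it appears in. The sketch below shows the standard algorithm (with the conventional k=60), not mem-mesh's internal implementation:

```python
def rrf_fuse(vector_ranked, fts_ranked, k=60):
    """Merge two ranked ID lists with Reciprocal Rank Fusion.

    Each document scores sum(1 / (k + rank)) across the lists it
    appears in; k=60 is the constant from the original RRF paper.
    """
    scores = {}
    for ranking in (vector_ranked, fts_ranked):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# "b" ranks high in both lists, so it tops the fused ranking.
print(rrf_fuse(["a", "b", "c"], ["b", "d", "a"]))  # → ['b', 'a', 'd', 'c']
```

Because scores depend only on rank, not raw similarity values, RRF needs no score normalization between the vector and FTS5 engines.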
Session & Pins
Session lifecycle
session_resume(project_id, expand="smart") → work → session_end(project_id, summary)
- session_resume restores incomplete pins and context from the previous session. Stale pins are auto-closed.
- expand="smart" applies an importance × status matrix that cuts token usage by ~60%.
- session_end records a summary and closes the session. If the session terminates abnormally, the next session_resume automatically recovers open pins.
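The actual tiering logic inside session_resume(expand="smart") is internal; the sketch below is a hypothetical illustration of what an importance × status matrix with four expansion tiers could look like:

```python
# Hypothetical 4-tier importance x status matrix — the real tiers and
# cutoffs inside session_resume(expand="smart") may differ.
def expand_tier(importance: int, status: str) -> str:
    if status == "in_progress":
        return "full"                 # active work always loads fully
    if status == "open":
        return "full" if importance >= 4 else "compact"
    # completed pins: keep only high-importance outcomes
    return "summary" if importance >= 4 else "skip"

print(expand_tier(5, "open"))       # → full
print(expand_tier(2, "completed"))  # → skip
```

Loading low-importance completed pins as "skip" and planned work as "compact" is where the bulk of the token savings would come from.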
Pin lifecycle
Pins are the unit of work inside a session. Track code changes, implementations, and configuration work as pins — not in permanent memory.
pin_add(content, project_id) → do the work → pin_complete(pin_id, promote=True)
# promote=True completes and promotes in one call
Status flow: open (planned, not started) → in_progress (active; default on pin_add) → completed
Multi-step work can pre-register later steps as open pins, then activate them one at a time.
Auto-stale cleanup (triggered on session_resume):
- in_progress pins older than 7 days → auto-completed
- open pins older than 30 days → auto-completed
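The cleanup rule is easy to express in code. This is a minimal sketch using the thresholds above; the pin record shape is illustrative, not mem-mesh's storage schema:

```python
from datetime import datetime, timedelta, timezone

# Auto-stale rule from the docs: in_progress > 7 days, open > 30 days.
# The pin dict layout here is illustrative.
STALE_AFTER = {"in_progress": timedelta(days=7), "open": timedelta(days=30)}

def close_stale_pins(pins, now=None):
    now = now or datetime.now(timezone.utc)
    for pin in pins:
        limit = STALE_AFTER.get(pin["status"])
        if limit and now - pin["updated_at"] > limit:
            pin["status"] = "completed"
    return pins

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
pins = [
    {"id": 1, "status": "in_progress", "updated_at": now - timedelta(days=10)},
    {"id": 2, "status": "open", "updated_at": now - timedelta(days=5)},
]
result = close_stale_pins(pins, now=now)
print([p["status"] for p in result])  # → ['completed', 'open']
```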
When to pin: only when files change. Questions, explanations, and read-only lookups do not need pins. Multi-step tasks get one pin per step.
Importance levels:
| Level | Use for |
|---|---|
| 5 | Architecture decisions, core design changes |
| 3–4 | Feature implementations, significant fixes |
| 1–2 | Minor edits, typo fixes |
| omit | Auto-inferred from content |
Promote: pin_complete(pin_id, promote=True) completes and promotes to permanent memory in one call. To promote after the fact: pin_promote(pin_id).
Client detection: In HTTP mode, the calling client is identified from the MCP initialize handshake or User-Agent header (25+ IDE/AI platforms supported). In stdio mode, set MEM_MESH_CLIENT in the environment.
AI agent checklist
1. Session start → session_resume(project_id, expand="smart")
2. Past context → search() before coding if referencing previous decisions
3. Track work → pin_add → pin_complete (promote=True to merge into memory)
4. Permanent store → decision / bug / incident / idea / code_snippet only
5. Session end → session_end(project_id, summary, auto_complete_pins=True)
6. Never store → API keys / tokens / passwords / PII
Principle: Hooks are read-only signals. All pin creation, completion, and promotion decisions are made by the LLM with full context.
Memory Relations
Seven relation types: related | parent | child | supersedes | references | depends_on | similar
get_links direction: outgoing | incoming | both
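The direction parameter can be understood as a filter over a directed edge set. A minimal sketch, assuming a (source, relation_type, target) tuple layout that is illustrative rather than the actual storage format:

```python
# Illustrative relation store: (source_id, relation_type, target_id).
links = [
    (1, "supersedes", 2),
    (3, "depends_on", 1),
    (1, "references", 4),
]

def get_links(memory_id, direction="both"):
    out = [l for l in links if l[0] == memory_id]   # memory_id is source
    inc = [l for l in links if l[2] == memory_id]   # memory_id is target
    return {"outgoing": out, "incoming": inc, "both": out + inc}[direction]

print(len(get_links(1, "outgoing")))  # → 2
print(len(get_links(1, "incoming")))  # → 1
```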
Configuration
| Variable | Description | Default |
|---|---|---|
| MEM_MESH_DATABASE_PATH | SQLite database path | ./data/memories.db |
| MEM_MESH_EMBEDDING_MODEL | Embedding model name | all-MiniLM-L6-v2 |
| MEM_MESH_EMBEDDING_DIM | Vector dimensions | 384 |
| MEM_MESH_SERVER_PORT | Web server port | 8000 |
| MEM_MESH_SEARCH_THRESHOLD | Minimum similarity score | 0.5 |
| MEM_MESH_USE_UNIFIED_SEARCH | Enable hybrid search | true |
| MEM_MESH_ENABLE_KOREAN_OPTIMIZATION | Korean n-gram FTS | true |
| MCP_LOG_LEVEL | MCP server log level | INFO |
| MCP_LOG_FILE | MCP log output file | (none) |
See .env.example for the full list.
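Reading a few of these variables with their documented defaults follows the usual env-with-fallback pattern; this is a generic sketch, not mem-mesh's actual config loader:

```python
import os

# Generic env-with-fallback pattern using defaults from the table above.
def load_config(env=None):
    env = os.environ if env is None else env
    return {
        "db_path": env.get("MEM_MESH_DATABASE_PATH", "./data/memories.db"),
        "embedding_dim": int(env.get("MEM_MESH_EMBEDDING_DIM", "384")),
        "search_threshold": float(env.get("MEM_MESH_SEARCH_THRESHOLD", "0.5")),
    }

print(load_config(env={}))  # all defaults
```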
Web Dashboard
- Dashboard: http://localhost:8000
- API docs (Swagger): http://localhost:8000/docs
- Health check: http://localhost:8000/health
Architecture
flowchart LR
subgraph Clients
Cursor[Cursor]
Claude[Claude Desktop]
Kiro[Kiro]
Web[Web Client]
end
subgraph Transport
Stdio[Stdio MCP]
SSE[SSE / Streamable HTTP]
end
subgraph Core
MCP[mcp_common]
Storage[Storage Service]
end
subgraph Data
SQLite[(SQLite + sqlite-vec + FTS5)]
end
Cursor --> Stdio
Claude --> Stdio
Kiro --> Stdio
Web --> SSE
Stdio --> MCP
SSE --> MCP
MCP --> Storage
Storage --> SQLite
Directory structure
mem-mesh/
├── app/
│ ├── core/ # DB, embeddings, services, schemas
│ ├── mcp_common/ # Shared MCP tools, dispatcher, batch
│ ├── mcp_stdio/ # FastMCP stdio server
│ ├── mcp_stdio_pure/ # Pure MCP stdio server
│ └── web/ # FastAPI (dashboard, SSE MCP, OAuth, WebSocket)
├── static/ # Frontend (Vanilla JS, Web Components)
├── tests/ # pytest
├── scripts/ # Migration and benchmark scripts
├── docs/rules/ # AI agent rule modules
├── data/ # memories.db
└── logs/
Docker
# Build and start
make quickstart
# or step by step:
make docker-build && make docker-up
# Open http://localhost:8000
Development
# Install dev dependencies
pip install -e ".[dev]"
# Run tests
python -m pytest tests/ -v
# Format and lint
black app/ tests/
ruff check app/ tests/
# Check embedding migration status
python scripts/migrate_embeddings.py --check-only
Documentation
- CLAUDE.md — AI tool checklist (MUST/SHOULD/MAY rules, security policy)
- AGENTS.md — Project context, Golden Rules, Context Map, session management details
AI agent rule modules (docs/rules/)
| File | Purpose |
|---|---|
| DEFAULT_PROMPT.md | Default behavior rules — copy into your project's CLAUDE.md or Cursor rules |
| all-tools-full.md | Full rules for all 15 tools |
| mem-mesh-ide-prompt.md | Compact IDE prompt (~300 tokens) |
| modules/quick-start.md | 5-minute quick start |
| modules/ | Feature modules: core, search, pins, relations, batch |
Architecture docs
- app/core/AGENTS.md — Core service internals
- app/mcp_common/AGENTS.md — MCP common layer
Contributing
- Open an issue or pull request
- Follow black and ruff formatting
- Add tests for any new behavior
See CONTRIBUTING.md for details and CHANGELOG.md for release history.
License