# MemoryMesh
Universal MCP hub for personal data. Local-first, private by default, designed to be the memory layer of the agents you'll build next.
MemoryMesh indexes your local files — and in future versions, your emails, calendar, browser history, and chat logs — and exposes them through the Model Context Protocol. Any MCP-aware client (Claude Desktop, Cursor, Claude Code, or your own agent) can ask semantic questions over the things you actually own, without sending a single byte to the cloud.
It is a hub, not a single-purpose RAG. The transport, embedding model, parser, and chunking strategy are all swappable behind clean interfaces — so the same hub can grow from "search my notes" to "remember everything for my Agent OS."
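To make the "swappable behind clean interfaces" idea concrete, here is a minimal sketch of how such seams can look in Python. The names (`Embedder`, `Chunker`, `HashEmbedder`, `ParagraphChunker`) are illustrative, not MemoryMesh's actual API:

```python
from typing import Protocol, runtime_checkable

# Illustrative only: these interfaces are not MemoryMesh's real API.
# Any component that satisfies the Protocol can be swapped in.

@runtime_checkable
class Embedder(Protocol):
    def embed(self, texts: list[str]) -> list[list[float]]: ...

@runtime_checkable
class Chunker(Protocol):
    def chunk(self, text: str) -> list[str]: ...

class HashEmbedder:
    """Toy embedder: maps each text to a 2-dim vector. Stands in for
    a sentence-transformers model behind the same interface."""
    def embed(self, texts: list[str]) -> list[list[float]]:
        return [[float(len(t)), float(sum(map(ord, t)) % 97)] for t in texts]

class ParagraphChunker:
    """Toy chunker: splits on blank lines."""
    def chunk(self, text: str) -> list[str]:
        return [p.strip() for p in text.split("\n\n") if p.strip()]

assert isinstance(HashEmbedder(), Embedder)
assert isinstance(ParagraphChunker(), Chunker)
```

Swapping the embedding model or chunking strategy then means providing another class with the same methods, with no changes to the hub itself.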
## Why this exists

Personal data is fragmented across dozens of apps, and no AI agent can reach all of it in a unified, private way. Anthropic's MCP defined the protocol; MemoryMesh supplies the missing hub that wires everything together: locally, with privacy as a precondition rather than a setting.
## How it works

```
MCP clients ────▶ ┌───────────────────────────────┐
(Claude Desktop,  │          MemoryMesh           │
 Cursor, agents)  │  ┌─────────────────────────┐  │
                  │  │  MCP Tools (FastMCP):   │  │
                  │  │    search_memory        │  │
                  │  │    list_sources         │  │
                  │  │    get_document         │  │
                  │  │    index_now            │  │
                  │  └───────────┬─────────────┘  │
                  │              ▼                │
                  │        Search Engine          │
                  │      dense + BM25 → RRF       │
                  │              │                │
                  │      ┌───────┴────────┐       │
                  │      ▼                ▼       │
                  │   ChromaDB          BM25      │
                  │ (embeddings)      (sparse)    │
                  │      ▲                ▲       │
                  │      └──── Indexer ───┘       │
                  │             ▲                 │
                  │         Watchdog              │
                  └─────────────┬─────────────────┘
                                ▼
                        Your filesystem
```
**Indexing pipeline:** file watcher detects changes → SHA-256 dedup skips unchanged files → parser (txt/md/pdf/docx/code) → smart chunker (tree-sitter for code, by-heading for markdown, recursive for text) → embeddings via sentence-transformers → upsert into ChromaDB + BM25 index.
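The dedup step can be sketched in a few lines of stdlib Python. This is an illustrative sketch, not MemoryMesh's actual code; `file_digest` and `needs_reindex` are hypothetical names:

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of the file's bytes, read in 64 KiB blocks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            h.update(block)
    return h.hexdigest()

def needs_reindex(path: Path, seen: dict[str, str]) -> bool:
    """True if the file is new or its content changed since the last index."""
    digest = file_digest(path)
    if seen.get(str(path)) == digest:
        return False          # unchanged -> skip parsing/embedding entirely
    seen[str(path)] = digest  # record the new digest and reindex
    return True
```

The point of hashing content rather than trusting mtimes: a touched-but-unchanged file costs one hash, not a full parse-chunk-embed pass.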
**Search pipeline:** query → dense search (ChromaDB) + sparse search (BM25) over-fetch → Reciprocal Rank Fusion (k=60) → top-k results with path, preview, score, and metadata.
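Reciprocal Rank Fusion is small enough to show whole. A minimal sketch (the `rrf` helper and the document names are illustrative, not MemoryMesh's code) of merging the dense and sparse result lists with k=60:

```python
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse ranked doc-id lists: score(d) = sum over lists of 1 / (k + rank)."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

dense  = ["notes.md", "plan.md", "todo.txt"]   # ChromaDB order
sparse = ["plan.md", "log.txt", "notes.md"]    # BM25 order
print(rrf([dense, sparse]))
# → ['plan.md', 'notes.md', 'log.txt', 'todo.txt']
```

Documents ranked well by both retrievers float to the top, and the large k keeps any single list's top hit from dominating; no score normalization between the two retrievers is needed, which is the usual reason to prefer RRF over weighted score sums.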
## What makes it different

Most comparable tools pick one dimension to optimize. MemoryMesh aims to hit all of them at once:
| Feature | MemoryMesh | LangChain | LlamaIndex | PrivateGPT | AnythingLLM | MemGPT | Haystack |
|---|---|---|---|---|---|---|---|
| MCP native | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Hybrid search (dense + BM25 + RRF) | ✅ | Partial | Partial | ❌ | ❌ | ❌ | ✅ |
| Real-time watcher + SHA-256 dedup | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Post-crash reconciliation | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| 100% local, zero telemetry | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Cross-platform (Win/Linux/Mac) | ✅ | ✅ | ✅ | Partial | Partial | ✅ | ✅ |
| No framework dependency | ✅ | — | — | ❌ | ❌ | ❌ | — |
| Designed as infrastructure | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
**MCP native** means it was built for MCP from day one, not bolted on afterwards. The four tools (`search_memory`, `list_sources`, `get_document`, `index_now`) expose a stable API that will not break across versions.

**Designed as infrastructure** means the architecture anticipates multi-agent access, per-agent permissions, and hardware agents (ESP32, Arduino) querying the same hub. See Roadmap.
## Status
| Feature | Status |
|---|---|
| Local file indexing (txt, md, code, pdf, docx) | ✅ |
| Hybrid search — dense + BM25 + RRF | ✅ |
| MCP server — 4 tools, stdio + streamable-http | ✅ |
| Real-time incremental indexing (watchdog + debounce) | ✅ |
| Tree-sitter code chunking (Python, JS, TS, Go, Rust…) | ✅ |
| Cross-platform — Windows / Linux / macOS | ✅ |
| Post-crash reconciliation | ✅ |
| Optional OCR for scanned PDFs (Tesseract / EasyOCR) | ✅ |
| Privacy audit log (query hashes only, no cleartext) | ✅ |
| 172 tests — unit + integration | ✅ |
| Parent Document Retriever (`extended_preview`) | 🔜 v0.2 |
| GitHub Actions CI (Ubuntu / Windows / macOS) | 🔜 v0.2 |
| Docker + docker-compose | 🔜 v0.2 |
| Cross-encoder reranker | 🔜 v0.3 |
| Evaluation framework (Precision@k, MRR, NDCG) | 🔜 v0.3 |
| RAG with local LLM (Ollama) | 🔜 v0.4 |
| Email / Calendar / Browser sources | 🔜 v0.4 |
| Per-agent permission layer | 🔜 v0.5 |
## Quickstart

Prerequisite: Python 3.11+ and `uv`.

```bash
# Install from PyPI
pip install memorymesh-mcp
```

Or clone for development:

```bash
# Clone and install
git clone https://github.com/kilhubprojects/memory-mesh.git
cd memory-mesh
uv sync

# Initialize state directory and copy example config
uv run memorymesh init

# Edit config.yaml — point it at the folders you want indexed
# (see Configuration section below)

# Index a folder
uv run memorymesh index ~/Documents

# Test a search
uv run memorymesh search "how did I configure the debounce"
```

### Run as a daemon (real-time indexing)

```bash
uv run memorymesh start --transport streamable-http --detach
uv run memorymesh status

# edit a file in one of your sources — it gets indexed within ~2s
uv run memorymesh search "the sentence you just typed"

uv run memorymesh stop
```
## Wire it into Claude Desktop

Add to your Claude Desktop config:

- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
- Linux: `~/.config/Claude/claude_desktop_config.json`

```json
{
  "mcpServers": {
    "memorymesh": {
      "command": "uv",
      "args": [
        "run",
        "--directory", "/absolute/path/to/memory-mesh",
        "memorymesh", "serve", "--stdio"
      ]
    }
  }
}
```
Restart Claude Desktop. The four tools appear automatically.
## MCP Tools

| Tool | Description |
|---|---|
| `search_memory(query, top_k, mode, source)` | Hybrid search over all indexed content. Returns path, preview, score, file type, and source. |
| `list_sources()` | List all configured sources with file counts and index status. |
| `get_document(path, max_bytes)` | Read the full content of an indexed file (up to 1 MB by default). |
| `index_now(path)` | Force immediate re-index of a file or directory, bypassing the watcher. |
All tools are backward-compatible. The v0.1 signatures are frozen; adding `extended_preview` in v0.2 is additive, not breaking.
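For orientation, an MCP client invokes these tools with the protocol's standard JSON-RPC `tools/call` request. The shape below follows the MCP specification; the argument values are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_memory",
    "arguments": { "query": "debounce configuration", "top_k": 5 }
  }
}
```

MCP-aware clients such as Claude Desktop generate these requests themselves; you only see the tool name and arguments in the UI.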
## Configuration

Everything lives in `config.yaml`. See `config.example.yaml` for a fully commented reference. Key highlights:

```yaml
sources:
  - name: documents
    path: ~/Documents
    recursive: true
    extensions: [.txt, .md, .pdf, .docx]
  - name: projects
    path: ~/Projects
    recursive: true
    extensions: [.py, .js, .ts, .go, .rs, .md]

embeddings:
  model: all-MiniLM-L6-v2  # swap to paraphrase-multilingual-MiniLM-L12-v2 for PT/EN

search:
  mode: hybrid  # hybrid | dense | sparse
  top_k: 10

server:
  transport: stdio  # stdio | streamable-http
```
A global ignore list protects sensitive paths by default: `.env`, `*.key`, `id_rsa*`, `secrets/`, `.ssh/`, `.aws/`, `.git/`, `node_modules/`.
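One way such an ignore list can work, sketched with stdlib `fnmatch` (this is an assumption about the mechanism, not MemoryMesh's actual matcher; `is_ignored` is a hypothetical helper):

```python
import fnmatch
from pathlib import PurePosixPath

# Glob-style patterns vetoed before a path ever reaches the indexer.
IGNORE = [".env", "*.key", "id_rsa*", "secrets/*", ".ssh/*", ".aws/*",
          ".git/*", "node_modules/*"]

def is_ignored(path: str) -> bool:
    parts = PurePosixPath(path).parts
    for pattern in IGNORE:
        # match the bare file name, and every path suffix, so that
        # "secrets/*" also catches "project/secrets/db.yaml"
        if fnmatch.fnmatch(parts[-1], pattern):
            return True
        for i in range(len(parts)):
            if fnmatch.fnmatch("/".join(parts[i:]), pattern):
                return True
    return False

assert is_ignored("project/.env")
assert is_ignored("keys/server.key")
assert is_ignored("repo/.git/config")
assert not is_ignored("notes/todo.md")
```

The important property is that the check runs before parsing, so a vetoed file is never read, chunked, or embedded.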
## Benchmarks
Benchmarks will be published here after v0.2 lands CI across all three platforms. The goal is reproducible numbers — not "fast on my machine."
Scripts are already in benchmarks/ and runnable locally:
- `bench_indexing.py` — indexing throughput (chunks/s, MB/s) on a synthetic corpus
- `bench_search_latency.py` — p50/p95/p99 search latency across hybrid/dense/sparse modes
- `bench_embedding_models.py` — speed vs. quality comparison across three embedding models
## Privacy & security

Three hard commitments that do not change across versions:

- **No data leaves your machine.** No telemetry. No external API calls unless you explicitly opt in — and even then, there is a `WARNING` in the log.
- **The HTTP listener binds to `127.0.0.1` by default.** Exposing it to other interfaces requires an explicit config override.
- **Logs never contain document content or queries in cleartext.** The audit log records query hashes, not queries.
Encryption at rest is on the roadmap. If your disk is encrypted at the OS level, you are covered for the threat model MemoryMesh is designed against.
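The "query hashes, not queries" commitment can be sketched in two lines of stdlib Python. This is an illustration of the idea, not the actual MemoryMesh implementation; `audit_entry` is a hypothetical helper:

```python
import hashlib

def audit_entry(query: str) -> str:
    """Log line for a search: a truncated SHA-256 of the query, never the text.
    The digest lets you correlate repeated queries without storing cleartext."""
    digest = hashlib.sha256(query.encode("utf-8")).hexdigest()[:16]
    return f"search query_sha256={digest}"

entry = audit_entry("how did I configure the debounce")
assert "debounce" not in entry  # no query words leak into the log
```

The same query always produces the same digest, so frequency analysis over your own logs still works, while anyone reading the log file learns nothing about what you searched for.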
## Roadmap
| Version | Focus | ETA |
|---|---|---|
| v0.2 | Security hardening + CI/CD + Parent Document Retriever | soon |
| v0.3 | Eval framework (Precision@k, MRR) + reranker + query expansion | — |
| v0.4 | Local LLM via Ollama (full RAG) + email/calendar sources | — |
| v0.5 | Per-agent permissions + hierarchical memory (hot/warm/cold) | — |
| v1.0 | Agent OS integration — memory layer for multi-agent systems | ~6 months |
| v2.0 | Hardware agents — ESP32/Arduino querying the hub over BLE/WiFi | ~12 months |
Full details in ROADMAP.md.
Troubleshooting
UnicodeDecodeErroron a text file — MemoryMesh tries UTF-8, UTF-8 BOM, cp1252, latin-1 in order. If a file still fails, it is logged and skipped, not crashed.- Watcher doesn't fire on a network drive / WSL mount — set
watcher.use_polling: trueinconfig.yaml. - Tesseract not found — install it system-wide and ensure it is in
PATH. Windows: UB-Mannheim installer. - Embedding model mismatch after changing config — run
memorymesh reindex --all. The CLI refuses to start if the model ID stored in ChromaDB does not match the config.
## About this project
MemoryMesh is a solo project by Carlos, a high school student (3rd year, STEM) from Brazil, aiming for mechanical engineering at MIT.
It was built using vibe coding — writing code in tight collaboration with LLMs at high speed — with structured architectural reviews at each phase. The process: the LLM proposes code, the architect reviews it for correctness, design gaps, and spec violations, and the test suite confirms the result. Bugs that slipped through (startup order in the reconciliation system, a BM25 encapsulation violation, wrong constructor kwargs in the CLI) were caught in review before they ever ran in production.
This is what vibe coding looks like when you take the review step seriously: a 172-test suite, a real hybrid search pipeline, a reconciliation system, and an architecture designed to carry forward into an Agent OS — built by one person, in high school, in a few weeks.
Other projects by Carlos: a J.A.R.V.I.S.-style voice assistant, a robot with hybrid AI (PC + Arduino + micro:bit via Bluetooth), and a trading simulator with RandomForest + PyQt5.
## Contributing
MemoryMesh is not yet accepting external contributions — there is no CI or contribution guide in place yet. This changes in v0.2. Watch the repo or check back then.
## License
MIT. See LICENSE.
## Acknowledgements

The architecture was informed by studying LlamaIndex, LangChain, PrivateGPT, AnythingLLM, MemGPT, and Haystack: understanding what each does well and what it does not. Thanks also to chroma-mcp and the MCP Python SDK for showing what MCP-native looks like in practice.