mem0-open-mcp
Open-source MCP server for mem0 — local LLMs, self-hosted, Docker-free.
Created because the official mem0-mcp configuration wasn't working properly for my setup.
Features
- Local LLMs: Ollama (recommended), LMStudio*, or any OpenAI-compatible API
- Self-hosted: Your data stays on your infrastructure
- Docker-free: Simple pip install + CLI
- Flexible: YAML config with environment variable support
- Multiple Vector Stores: Qdrant, Chroma, Pinecone, and more
*LMStudio requires a model with JSON mode support
Quick Start
Installation
pip install mem0-open-mcp
Or install from source:
git clone https://github.com/wonseoko/mem0-open-mcp.git
cd mem0-open-mcp
pip install -e .
Usage
# Create default config
mem0-open-mcp init
# Interactive configuration wizard
mem0-open-mcp configure
# Test configuration (recommended for initial setup)
mem0-open-mcp test
# Start the server
mem0-open-mcp serve
# With options
mem0-open-mcp serve --port 8765 --user-id alice
The test command verifies your configuration without starting the server:
- Checks Vector Store, LLM, and Embedder connections
- Performs actual memory add/search operations
- Cleans up test data automatically
Modes
stdio Mode (for mcp-proxy or Claude Desktop)
Run the server in stdio mode when integrating with mcp-proxy or Claude Desktop:
mem0-open-mcp stdio
mem0-open-mcp stdio --config ./config.yaml
Use this mode when:
- Running via mcp-proxy
- Integrating with Claude Desktop as a subprocess
- The process should spawn on demand
Performance: stdio mode is optimized as of v0.2.1 with lightweight manager startup.
serve Mode (HTTP/SSE server)
Run a persistent HTTP server for remote access or multiple concurrent clients:
mem0-open-mcp serve --port 8765
Use this mode when:
- Remote access needed
- Multiple concurrent clients
- Always-on server preferred
- Custom port configuration required
mcp-proxy Integration
Use mcp-proxy to route the MCP protocol between your tools and Claude Desktop. Configure your mcp-servers.json:
{
  "mcpServers": {
    "mem0": {
      "command": "mem0-open-mcp",
      "args": ["stdio"]
    }
  }
}
Or with a custom config:
{
  "mcpServers": {
    "mem0": {
      "command": "mem0-open-mcp",
      "args": ["stdio", "--config", "/path/to/config.yaml"]
    }
  }
}
The stdio mode communicates via stdin/stdout, making it ideal for process-spawned integrations.
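Since the transport is newline-delimited JSON-RPC 2.0 (the standard MCP stdio transport), you can exercise the server directly from a short script. A minimal Python sketch of the initialize handshake; the exact response shape depends on the server:

import json
import subprocess

# Spawn the server in stdio mode, as mcp-proxy or Claude Desktop would.
proc = subprocess.Popen(
    ["mem0-open-mcp", "stdio"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

# Standard MCP initialize request (JSON-RPC 2.0, one message per line).
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "probe", "version": "0.1"},
    },
}
proc.stdin.write(json.dumps(initialize) + "\n")
proc.stdin.flush()

# The first line back should be the server's initialize response.
print(proc.stdout.readline().strip())
proc.terminate()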
Update Command
Keep mem0-open-mcp up to date with the self-update feature:
# Check for available updates
mem0-open-mcp update --check
# Force update to latest version
mem0-open-mcp update --force
# Update to the latest version (exits on success)
mem0-open-mcp update
Options:
- --check: Only check for available updates without installing
- --force: Force reinstall even if already at the latest version
Configuration
Create mem0-open-mcp.yaml:
server:
  host: "0.0.0.0"
  port: 8765
  user_id: "default"

llm:
  provider: "ollama"
  config:
    model: "llama3.2"
    base_url: "http://localhost:11434"

embedder:
  provider: "ollama"
  config:
    model: "nomic-embed-text"
    base_url: "http://localhost:11434"
    embedding_dims: 768

vector_store:
  provider: "qdrant"
  config:
    collection_name: "mem0_memories"
    host: "localhost"
    port: 6333
    embedding_model_dims: 768
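The Features list mentions environment variable support in the YAML config. The exact substitution syntax isn't documented here, but assuming conventional ${VAR} placeholders (an assumption, not confirmed by this README), expansion can be sketched in a few lines of Python:

import os
import yaml  # requires PyYAML

def load_config(path):
    # Read the raw YAML, expand ${VAR} / $VAR placeholders from the
    # environment, then parse. This is a hypothetical helper for
    # illustration; the server's own loader may differ.
    with open(path) as f:
        raw = f.read()
    return yaml.safe_load(os.path.expandvars(raw))

config = load_config("mem0-open-mcp.yaml")
print(config["vector_store"]["provider"])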
With LMStudio
⚠️ Note: LMStudio requires a model that supports response_format: json_object, since mem0 uses structured JSON output for memory extraction. If you get response_format errors, use Ollama instead or select a model with JSON mode support in LMStudio.
llm:
  provider: "openai"
  config:
    model: "your-model-name"
    base_url: "http://localhost:1234/v1"

embedder:
  provider: "openai"
  config:
    model: "your-embedding-model"
    base_url: "http://localhost:1234/v1"
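To check whether the model you've loaded in LMStudio supports JSON mode before wiring it into mem0, you can probe the OpenAI-compatible endpoint directly. A stdlib-only sketch; the model name is a placeholder matching the config above:

import json
import urllib.error
import urllib.request

# Ask LMStudio's OpenAI-compatible endpoint for a JSON-object response.
# If the loaded model lacks JSON mode, this typically fails with an
# error mentioning response_format.
payload = {
    "model": "your-model-name",  # placeholder, as in the config above
    "messages": [{"role": "user", "content": "Reply with a JSON object."}],
    "response_format": {"type": "json_object"},
}
req = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
except urllib.error.HTTPError as e:
    print("JSON mode probably unsupported:", e.read().decode())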
MCP Integration
Connect your MCP client to:
http://localhost:8765/mcp/<client-name>/sse/<user-id>
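Before pointing a client at it, you can sanity-check the SSE endpoint from a script. A minimal stdlib sketch, assuming the default port and user from the config above and an arbitrary client name ("probe"):

import urllib.request

# Open the SSE stream and print the first few events. MCP SSE clients
# normally keep this connection open; here we just peek and exit.
url = "http://localhost:8765/mcp/probe/sse/default"
with urllib.request.urlopen(url) as resp:
    for i, line in enumerate(resp):
        print(line.decode().rstrip())
        if i >= 5:
            break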
Claude Desktop
{
  "mcpServers": {
    "mem0": {
      "url": "http://localhost:8765/mcp/claude/sse/default"
    }
  }
}
Available MCP Tools
| Tool | Description |
|---|---|
| add_memories | Store new memories from text |
| search_memory | Search memories by query |
| list_memories | List all user memories |
| get_memory | Get a specific memory by ID |
| delete_memories | Delete memories by IDs |
| delete_all_memories | Delete all user memories |
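A short example of calling one of these tools over stdio with the official MCP Python SDK (pip install mcp). The tool names come from the table above; the add_memories argument name "text" is an assumption, so check each tool's input schema via list_tools first:

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Spawn mem0-open-mcp in stdio mode and open an MCP session over it.
    params = StdioServerParameters(command="mem0-open-mcp", args=["stdio"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Inspect the advertised tools and their input schemas.
            tools = await session.list_tools()
            print([t.name for t in tools.tools])

            # Store a memory; "text" is an assumed argument name.
            result = await session.call_tool(
                "add_memories", arguments={"text": "Alice prefers dark mode."}
            )
            print(result)

asyncio.run(main())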
API Endpoints
| Endpoint | Method | Description |
|---|---|---|
| /health | GET | Health check |
| /api/v1/status | GET | Server status |
| /api/v1/config | GET/PUT | Configuration |
| /api/v1/memories | GET/POST/DELETE | Memory operations |
| /api/v1/memories/search | POST | Search memories |
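For example, checking server health and running a search from a script (stdlib only; the search request body field "query" is an assumption, not a documented schema):

import json
import urllib.request

BASE = "http://localhost:8765"

# Health check: GET /health
with urllib.request.urlopen(f"{BASE}/health") as resp:
    print(resp.status, resp.read().decode())

# Search memories: POST /api/v1/memories/search
# The request body field name ("query") is assumed.
req = urllib.request.Request(
    f"{BASE}/api/v1/memories/search",
    data=json.dumps({"query": "dark mode"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))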
Requirements
- Python 3.10+
- Vector store (Qdrant recommended)
- LLM server (Ollama, LMStudio, etc.)
Performance Optimizations
stdio Mode Optimizations (v0.2.1+)
The stdio mode is optimized for performance:
- Lightweight Manager: Reduced startup overhead compared to HTTP server
- On-Demand Spawning: Process spawns only when needed for MCP requests
- No Server Overhead: Eliminates HTTP/SSE connection management
- Ideal for Claude Desktop: Minimal resource footprint when integrated via mcp-proxy
Use stdio mode for optimal performance in Claude Desktop or mcp-proxy integrations.
Performance Tips
- Use Qdrant vector store for best performance (recommended)
- Keep embedding dimensions consistent (768 or 1536)
- For large memory operations, increase vector store batch size in configuration
- Monitor Ollama performance with local models (llama3.2 recommended for speed)
Graph Store (Experimental)
The graph store adds knowledge graph capabilities for extracting relationships between entities.
Configuration
graph_store:
  provider: "neo4j"
  config:
    url: "bolt://localhost:7687"
    username: "neo4j"
    password: "your-password"
Installation
pip install mem0-open-mcp[neo4j]
# or
pip install mem0-open-mcp[kuzu]
Limitations
⚠️ Important: Graph store requires LLMs with proper tool calling support.
- OpenAI models: Full support (recommended for graph store)
- Ollama models: Limited support - most models (llama3.2, llama3.1) do not follow tool schemas accurately, resulting in empty graph relations
If you need graph capabilities with local LLMs, consider using the graph_store.llm setting to specify a different LLM provider for graph operations only.
# Example: Use OpenAI for graph, Ollama for everything else
llm:
  provider: "ollama"
  config:
    model: "llama3.2"

graph_store:
  provider: "neo4j"
  config:
    url: "bolt://localhost:7687"
    username: "neo4j"
    password: "password"
  llm:
    provider: "openai"
    config:
      model: "gpt-4o-mini"
License
Apache 2.0
Download files
File details
Details for the file mem0_open_mcp-0.2.9.tar.gz.
File metadata
- Download URL: mem0_open_mcp-0.2.9.tar.gz
- Size: 290.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | d49dc03776586c16741e1f532951eefb8080ed1125bb8894d0d3dbf8c38b962a |
| MD5 | 9d6c8ee9675d0a03f97007f50d215b7f |
| BLAKE2b-256 | 40c57daf5036e686bcf0869a78a0ae3fee9a9ae44ca6741c1b5f01d57ece8bae |
Provenance
The following attestation bundles were made for mem0_open_mcp-0.2.9.tar.gz:
- Publisher: publish.yml on wonseoko/mem0-open-mcp
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: mem0_open_mcp-0.2.9.tar.gz
- Subject digest: d49dc03776586c16741e1f532951eefb8080ed1125bb8894d0d3dbf8c38b962a
- Sigstore transparency entry: 927318321
- Permalink: wonseoko/mem0-open-mcp@935c9c8f3e2f5ee5e93cf0d00bd834823aa21bda
- Branch / Tag: refs/tags/v0.2.9
- Owner: https://github.com/wonseoko
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@935c9c8f3e2f5ee5e93cf0d00bd834823aa21bda
- Trigger Event: push
File details
Details for the file mem0_open_mcp-0.2.9-py3-none-any.whl.
File metadata
- Download URL: mem0_open_mcp-0.2.9-py3-none-any.whl
- Size: 38.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 87c502245ddeb234d706561a2948d1aa665c05eed232b8b769dfc267e3637da6 |
| MD5 | a5ec611bee8bc548479c7923ce87a6e0 |
| BLAKE2b-256 | ff66f8a08117bd63cf3268aa8c416eb2c206d1c02b8de8b0a48a423c8190d8e4 |
Provenance
The following attestation bundles were made for mem0_open_mcp-0.2.9-py3-none-any.whl:
- Publisher: publish.yml on wonseoko/mem0-open-mcp
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: mem0_open_mcp-0.2.9-py3-none-any.whl
- Subject digest: 87c502245ddeb234d706561a2948d1aa665c05eed232b8b769dfc267e3637da6
- Sigstore transparency entry: 927318323
- Permalink: wonseoko/mem0-open-mcp@935c9c8f3e2f5ee5e93cf0d00bd834823aa21bda
- Branch / Tag: refs/tags/v0.2.9
- Owner: https://github.com/wonseoko
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@935c9c8f3e2f5ee5e93cf0d00bd834823aa21bda
- Trigger Event: push