mem0-open-mcp

Open-source MCP server for mem0 - local LLMs, self-hosted, Docker-free.

Created because the official mem0-mcp configuration wasn't working properly for my setup.

Features

  • Local LLMs: Ollama (recommended), LMStudio*, or any OpenAI-compatible API
  • Self-hosted: Your data stays on your infrastructure
  • Docker-free: Simple pip install + CLI
  • Flexible: YAML config with environment variable support
  • Multiple Vector Stores: Qdrant, Chroma, Pinecone, and more

*LMStudio requires a model with JSON mode support (see the LMStudio section below)

Quick Start

Installation

pip install mem0-open-mcp

Or install from source:

git clone https://github.com/wonseoko/mem0-open-mcp.git
cd mem0-open-mcp
pip install -e .
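
To confirm the package is installed and check its version, pip's own metadata command works:

# Show installed package metadata, including the version
pip show mem0-open-mcp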

Usage

# Create default config
mem0-open-mcp init

# Interactive configuration wizard
mem0-open-mcp configure

# Test configuration (recommended for initial setup)
mem0-open-mcp test

# Start the server
mem0-open-mcp serve

# With options
mem0-open-mcp serve --port 8765 --user-id alice

The test command verifies your configuration without starting the server:

  • Checks Vector Store, LLM, and Embedder connections
  • Performs actual memory add/search operations
  • Cleans up test data automatically

Modes

stdio Mode (for mcp-proxy or Claude Desktop)

Run the server in stdio mode when integrating with mcp-proxy or Claude Desktop:

mem0-open-mcp stdio
mem0-open-mcp stdio --config ./config.yaml

Use this mode when:

  • Running via mcp-proxy
  • Integrating as a Claude Desktop subprocess
  • The process should spawn on demand

As of v0.2.1, stdio mode starts with a lightweight manager, reducing startup overhead.

serve Mode (HTTP/SSE server)

Run a persistent HTTP server for remote access or multiple concurrent clients:

mem0-open-mcp serve --port 8765

Use this mode when:

  • Remote access is needed
  • Multiple clients connect concurrently
  • An always-on server is preferred
  • A custom port configuration is required
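
Once the server is running, the documented /health endpoint (see API Endpoints below) offers a quick liveness check:

# Quick liveness check against the running server
curl http://localhost:8765/health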

mcp-proxy Integration

Use mcp-proxy to route the MCP protocol between this server and Claude Desktop. Configure your mcp-servers.json:

{
  "mcpServers": {
    "mem0": {
      "command": "mem0-open-mcp",
      "args": ["stdio"]
    }
  }
}

Or with a custom config:

{
  "mcpServers": {
    "mem0": {
      "command": "mem0-open-mcp",
      "args": ["stdio", "--config", "/path/to/config.yaml"]
    }
  }
}

The stdio mode communicates via stdin/stdout, making it ideal for process-spawned integrations.
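
For a quick smoke test without an MCP client, you can pipe a raw JSON-RPC message to the process. This is a sketch that assumes the server implements the standard MCP initialize handshake; the protocolVersion and clientInfo values below are illustrative assumptions:

# Send an MCP initialize request over stdin and print the response
# (protocolVersion and clientInfo are illustrative, not documented here)
echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"smoke-test","version":"0.0.0"}}}' | mem0-open-mcp stdio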

Update Command

Keep mem0-open-mcp up to date with the self-update feature:

# Check for available updates
mem0-open-mcp update --check

# Force update to latest version
mem0-open-mcp update --force

# Update and exit on success
mem0-open-mcp update

Options:

  • --check: Only check for available updates without installing
  • --force: Force reinstall even if already at latest version

Configuration

Create mem0-open-mcp.yaml:

server:
  host: "0.0.0.0"
  port: 8765
  user_id: "default"

llm:
  provider: "ollama"
  config:
    model: "llama3.2"
    base_url: "http://localhost:11434"

embedder:
  provider: "ollama"
  config:
    model: "nomic-embed-text"
    base_url: "http://localhost:11434"
    embedding_dims: 768

vector_store:
  provider: "qdrant"
  config:
    collection_name: "mem0_memories"
    host: "localhost"
    port: 6333
    embedding_model_dims: 768
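
The Features list mentions environment variable support in the YAML config. The exact substitution syntax is not documented on this page; a common convention, shown here purely as an assumed sketch, is ${VAR} interpolation:

# Hypothetical sketch: assumes ${VAR}-style interpolation, which is not confirmed here
llm:
  provider: "ollama"
  config:
    model: "llama3.2"
    base_url: "${OLLAMA_BASE_URL}"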

With LMStudio

⚠️ Note: LMStudio requires a model that supports response_format: json_object. mem0 uses structured JSON output for memory extraction. If you get response_format errors, use Ollama instead or select a model with JSON mode support in LMStudio.

llm:
  provider: "openai"
  config:
    model: "your-model-name"
    base_url: "http://localhost:1234/v1"

embedder:
  provider: "openai"
  config:
    model: "your-embedding-model"
    base_url: "http://localhost:1234/v1"
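
mem0's OpenAI provider configuration generally also accepts an api_key; local servers such as LMStudio ignore the value, so a placeholder is commonly used. Treat the field below as an assumption based on mem0's general OpenAI config, not on this package's docs:

llm:
  provider: "openai"
  config:
    model: "your-model-name"
    base_url: "http://localhost:1234/v1"
    api_key: "lm-studio"  # placeholder; local servers ignore it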

MCP Integration

Connect your MCP client to:

http://localhost:8765/mcp/<client-name>/sse/<user-id>

Claude Desktop

{
  "mcpServers": {
    "mem0": {
      "url": "http://localhost:8765/mcp/claude/sse/default"
    }
  }
}
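
The final path segment selects whose memories are exposed. For example, to scope the Claude Desktop connection to the alice user from the serve example above:

{
  "mcpServers": {
    "mem0": {
      "url": "http://localhost:8765/mcp/claude/sse/alice"
    }
  }
}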

Available MCP Tools

  • add_memories: Store new memories from text
  • search_memory: Search memories by query
  • list_memories: List all user memories
  • get_memory: Get a specific memory by ID
  • delete_memories: Delete memories by IDs
  • delete_all_memories: Delete all user memories

API Endpoints

  • GET /health: Health check
  • GET /api/v1/status: Server status
  • GET/PUT /api/v1/config: Configuration
  • GET/POST/DELETE /api/v1/memories: Memory operations
  • POST /api/v1/memories/search: Search memories
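
As a sketch of the search endpoint, the request below assumes a JSON body with query and user_id fields; the actual field names are not documented on this page:

# Hypothetical search request; the JSON field names are assumptions
curl -X POST http://localhost:8765/api/v1/memories/search \
  -H "Content-Type: application/json" \
  -d '{"query": "favorite color", "user_id": "default"}'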

Requirements

  • Python 3.10+
  • Vector store (Qdrant recommended)
  • LLM server (Ollama, LMStudio, etc.)
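
With Ollama installed, pulling the two models referenced in the sample configuration covers both the LLM and the embedder:

# Fetch the models used in the sample config
ollama pull llama3.2
ollama pull nomic-embed-text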

Performance Optimizations

stdio Mode Optimizations (v0.2.1+)

The stdio mode is optimized for performance:

  • Lightweight Manager: Reduced startup overhead compared to HTTP server
  • On-Demand Spawning: Process spawns only when needed for MCP requests
  • No Server Overhead: Eliminates HTTP/SSE connection management
  • Ideal for Claude Desktop: Minimal resource footprint when integrated via mcp-proxy

Use stdio mode for optimal performance in Claude Desktop or mcp-proxy integrations.

Performance Tips

  • Use the Qdrant vector store (recommended) for best performance
  • Keep embedding dimensions consistent between the embedder and vector store (e.g., 768 or 1536)
  • For large memory operations, increase the vector store batch size in the configuration
  • Monitor Ollama performance when using local models (llama3.2 is recommended for speed)

Graph Store (Experimental)

The graph store enables knowledge-graph capabilities, extracting relationships between entities.

Configuration

graph_store:
  provider: "neo4j"
  config:
    url: "bolt://localhost:7687"
    username: "neo4j"
    password: "your-password"

Installation

pip install mem0-open-mcp[neo4j]
# or
pip install mem0-open-mcp[kuzu]
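
Note that some shells (notably zsh) expand square brackets, so quote the extras:

# zsh treats [...] as a glob pattern; quoting avoids a "no matches found" error
pip install 'mem0-open-mcp[neo4j]'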

Limitations

⚠️ Important: Graph store requires LLMs with proper tool calling support.

  • OpenAI models: Full support (recommended for graph store)
  • Ollama models: Limited support - most models (llama3.2, llama3.1) do not follow tool schemas accurately, resulting in empty graph relations

If you need graph capabilities with local LLMs, consider using the graph_store.llm setting to specify a different LLM provider for graph operations only.

# Example: Use OpenAI for graph, Ollama for everything else
llm:
  provider: "ollama"
  config:
    model: "llama3.2"

graph_store:
  provider: "neo4j"
  config:
    url: "bolt://localhost:7687"
    username: "neo4j"
    password: "password"
  llm:
    provider: "openai"
    config:
      model: "gpt-4o-mini"

License

Apache 2.0
