MCP-compliant server for persistent hippocampus-style memory management with semantic search
# 🧠 Hippocampus Memory MCP Server

Persistent, Semantic Memory for Large Language Models

Features • Installation • Quick Start • Documentation • Architecture

## 📖 Overview
A Python-based Model Context Protocol (MCP) server that gives LLMs persistent, hippocampus-inspired memory across sessions. Store, retrieve, consolidate, and forget memories using semantic similarity search powered by vector embeddings.
Why hippocampus? Just as the human brain's hippocampus consolidates short-term memories into long-term storage, this server manages LLM memory through biologically inspired patterns:
- 🔄 Consolidation - Merge similar memories to reduce redundancy
- 🧹 Forgetting - Remove outdated information based on age/importance
- 🔍 Semantic Retrieval - Find relevant memories through meaning, not keywords

## ✨ Features
| Feature | Description |
|---|---|
| 🗄️ Vector Storage | FAISS-powered semantic similarity search |
| 🎯 MCP Compliant | Full MCP 1.2.0 spec compliance via FastMCP |
| 🧬 Bio-Inspired | Hippocampus-style consolidation and forgetting |
| 🔒 Security | Input validation, rate limiting, injection prevention |
| 🔍 Semantic Search | Sentence-transformer embeddings (CPU-optimized) |
| ♾️ Unlimited Storage | No memory count limits, only per-item size limits |
| 💯 100% Free | Local embedding model - no API costs |
## 🚀 Quick Start

### 5 Core MCP Tools

```
memory_read         # 🔍 Retrieve memories by semantic similarity
memory_write        # ✍️ Store new memories with tags & metadata
memory_consolidate  # 🔄 Merge similar memories
memory_forget       # 🧹 Remove memories by age/importance/tags
memory_stats        # 📊 Get system statistics
```
## 📦 Installation

### Prerequisites

- Python 3.9+
- ~200MB disk space (for embedding model)

### Setup in 3 Steps

```bash
# 1. Clone the repository
git clone https://github.com/jameslovespancakes/Memory-MCP.git
cd Memory-MCP

# 2. Install dependencies
pip install -r requirements.txt

# 3. Run the server
python -m memory_mcp_server.server
```
### Claude Desktop Integration

Add to your Claude Desktop config (`claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "memory": {
      "command": "python",
      "args": ["-m", "memory_mcp_server.server"],
      "cwd": "/path/to/Memory-MCP"
    }
  }
}
```
🎉 That's it! Claude will now have persistent memory across conversations.
## 📚 Documentation

### Memory Operations via MCP

Once connected to Claude, use natural language:

"Remember that I prefer Python for backend development"
→ Claude calls `memory_write()`

"What do you know about my programming preferences?"
→ Claude calls `memory_read()`

"Consolidate similar memories to clean up storage"
→ Claude calls `memory_consolidate()`
### Direct API Usage

#### ✍️ Writing Memories

```python
from memory_mcp_server.storage import MemoryStorage
from memory_mcp_server.tools import MemoryTools

storage = MemoryStorage(storage_path="my_memory")
await storage._ensure_initialized()
tools = MemoryTools(storage)

# Store with tags and importance
await tools.memory_write(
    text="User prefers dark mode UI",
    tags=["preference", "ui"],
    importance_score=3.0,
    metadata={"category": "settings"}
)
```
#### 🔍 Reading Memories

```python
# Semantic search
result = await tools.memory_read(
    query_text="What are my UI preferences?",
    top_k=5,
    min_similarity=0.3
)

# Filter by tags and date
result = await tools.memory_read(
    query_text="Python learning",
    tags=["learning", "python"],
    date_range_start="2024-01-01"
)
```
#### 🔄 Consolidating Memories

```python
# Merge similar memories (threshold: 0.85)
result = await tools.memory_consolidate(similarity_threshold=0.85)
print(f"Merged {result['consolidated_groups']} groups")
```
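Conceptually, consolidation amounts to grouping memories whose pairwise vector similarity exceeds the threshold and merging each group. The sketch below is a toy, stdlib-only illustration of such a greedy grouping pass over 2-dimensional vectors; it is not the project's actual implementation, and the function names are hypothetical.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def group_similar(vectors, threshold=0.85):
    """Greedily group vector indices whose similarity to a group's
    seed vector meets the threshold (a toy consolidation pass)."""
    groups, assigned = [], set()
    for i, v in enumerate(vectors):
        if i in assigned:
            continue
        group = [i]
        assigned.add(i)
        for j in range(i + 1, len(vectors)):
            if j not in assigned and cosine(v, vectors[j]) >= threshold:
                group.append(j)
                assigned.add(j)
        groups.append(group)
    return groups

# Two near-duplicate vectors and one unrelated vector
vecs = [[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]]
print(group_similar(vecs))  # [[0, 1], [2]]
```

Each resulting group would then be merged into a single memory; with a 0.85 threshold, only genuinely redundant entries collapse together.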
#### 🧹 Forgetting Memories

```python
# Remove by age
await tools.memory_forget(max_age_days=30)

# Remove by importance
await tools.memory_forget(min_importance_score=2.0)

# Remove by tags
await tools.memory_forget(tags_to_forget=["temporary"])
```
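The forget filters boil down to a simple metadata pass over stored memories. Below is a stand-alone toy version of age- and tag-based forgetting; the dict fields `created_at` and `tags` are illustrative, not the project's actual storage schema.

```python
from datetime import datetime, timedelta, timezone

def forget(memories, max_age_days=None, tags_to_forget=None):
    """Return only the memories that survive the forget pass
    (a toy sketch of age- and tag-based filtering)."""
    now = datetime.now(timezone.utc)
    kept = []
    for m in memories:
        if max_age_days is not None:
            if now - m["created_at"] > timedelta(days=max_age_days):
                continue  # too old: forget
        if tags_to_forget and set(m["tags"]) & set(tags_to_forget):
            continue  # carries a forgotten tag
        kept.append(m)
    return kept

old = datetime.now(timezone.utc) - timedelta(days=90)
mems = [
    {"text": "old note", "created_at": old, "tags": []},
    {"text": "scratch", "created_at": datetime.now(timezone.utc), "tags": ["temporary"]},
    {"text": "keep me", "created_at": datetime.now(timezone.utc), "tags": ["preference"]},
]
print([m["text"] for m in forget(mems, max_age_days=30, tags_to_forget=["temporary"])])
# ['keep me']
```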
### Testing

Run the included test suite:

```bash
python test_memory.py
```

This tests all five operations with sample data.
## 🏗️ Architecture

```
┌─────────────────────────────────────────────────────┐
│           MCP Client (Claude Desktop, etc.)         │
└───────────────────┬─────────────────────────────────┘
                    │ JSON-RPC over stdio
┌───────────────────┴─────────────────────────────────┐
│            FastMCP Server (server.py)               │
│  ├─ memory_read                                     │
│  ├─ memory_write                                    │
│  ├─ memory_consolidate                              │
│  ├─ memory_forget                                   │
│  └─ memory_stats                                    │
└───────────────────┬─────────────────────────────────┘
                    │
┌───────────────────┴─────────────────────────────────┐
│             Memory Tools (tools.py)                 │
│  ├─ Input validation & sanitization                 │
│  └─ Rate limiting (100 req/min)                     │
└───────────────────┬─────────────────────────────────┘
                    │
┌───────────────────┴─────────────────────────────────┐
│            Storage Layer (storage.py)               │
│  ├─ Sentence Transformers (all-MiniLM-L6-v2)        │
│  ├─ FAISS Vector Index (cosine similarity)          │
│  └─ JSON persistence (memories.json)                │
└─────────────────────────────────────────────────────┘
```
## 🔄 Memory Lifecycle

| Step | Process | Technology |
|---|---|---|
| ✍️ Write | Text → 384-dim vector embedding | Sentence Transformers (CPU) |
| 💾 Store | Normalized vector → FAISS index | FAISS IndexFlatIP |
| 🔍 Search | Query → embedding → top-k similar | Cosine similarity |
| 🔄 Consolidate | Group similar (>0.85) → merge | Vector clustering |
| 🧹 Forget | Filter by age/importance/tags → delete | Metadata filtering |
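The store-and-search steps rely on a standard trick: after L2 normalization, the inner product that FAISS's `IndexFlatIP` computes equals cosine similarity. A stdlib-only sketch of this idea, using toy 3-dimensional vectors in place of the real 384-dimensional embeddings (the function names are illustrative, not the project's API):

```python
import math

def normalize(v):
    """L2-normalize so that inner product equals cosine similarity."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def top_k(query, index, k=2):
    """Toy stand-in for an IndexFlatIP search: rank stored
    (normalized) vectors by inner product with the query."""
    q = normalize(query)
    scores = [(sum(a * b for a, b in zip(q, v)), i) for i, v in enumerate(index)]
    scores.sort(reverse=True)
    return [(i, round(s, 3)) for s, i in scores[:k]]

# "Store": normalize before indexing, as the lifecycle table describes
index = [normalize(v) for v in [[1, 0, 0], [0.9, 0.1, 0], [0, 0, 1]]]

# "Search": nearest stored vectors to the query, by cosine similarity
print(top_k([1, 0, 0], index))  # [(0, 1.0), (1, 0.994)]
```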
## 🔒 Security

| Protection | Implementation |
|---|---|
| 🛡️ Injection Prevention | Regex filtering of script tags, eval(), path traversal |
| ⏱️ Rate Limiting | 100 requests per 60-second window per client |
| 📏 Size Limits | 50KB text, 5KB metadata, 20 tags per memory |
| ✅ Input Validation | Pydantic models + custom sanitization |
| 📝 Safe Logging | stderr only (prevents JSON-RPC corruption) |
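The rate-limiting row describes a classic sliding-window limiter: at most N requests per window, per client. A minimal self-contained sketch of the technique (not the project's actual code; class and method names are hypothetical):

```python
import time
from collections import deque

class SlidingWindowRateLimiter:
    """Allow at most `limit` requests per `window` seconds per client."""

    def __init__(self, limit=100, window=60.0):
        self.limit = limit
        self.window = window
        self.hits = {}  # client_id -> deque of request timestamps

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits.setdefault(client_id, deque())
        # Drop timestamps that have aged out of the window
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit: reject
        q.append(now)
        return True

limiter = SlidingWindowRateLimiter(limit=3, window=60.0)
print([limiter.allow("c1", now=t) for t in (0, 1, 2, 3)])  # [True, True, True, False]
print(limiter.allow("c1", now=61))  # True (earlier requests aged out)
```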
## ⚙️ Configuration

### Environment Variables

```bash
MEMORY_STORAGE_PATH="memory_data"   # Storage directory
EMBEDDING_MODEL="all-MiniLM-L6-v2"  # Model name
RATE_LIMIT_REQUESTS=100             # Max requests per window
RATE_LIMIT_WINDOW=60                # Time window (seconds)
```
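A loader for these variables might look like the following stdlib sketch, falling back to the documented defaults; this is illustrative only, and the project's actual configuration code may differ.

```python
import os

def load_config(env=None):
    """Read server settings from environment variables, using the
    documented values as defaults (a sketch, not the project's loader)."""
    env = os.environ if env is None else env
    return {
        "storage_path": env.get("MEMORY_STORAGE_PATH", "memory_data"),
        "embedding_model": env.get("EMBEDDING_MODEL", "all-MiniLM-L6-v2"),
        "rate_limit_requests": int(env.get("RATE_LIMIT_REQUESTS", "100")),
        "rate_limit_window": int(env.get("RATE_LIMIT_WINDOW", "60")),
    }

# Override one setting, inherit defaults for the rest
print(load_config({"RATE_LIMIT_REQUESTS": "50"}))
```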
### Storage Limits

- ✅ Unlimited total memories (no count limit)
- ⚠️ Per-memory limits: 50KB text, 5KB metadata, 20 tags
## 🔧 Troubleshooting

Model won't download

The first run downloads all-MiniLM-L6-v2 (~90MB). Ensure an internet connection and write permission on ~/.cache/.

PyTorch compatibility errors

```bash
pip uninstall torch transformers sentence-transformers -y
pip install torch==2.1.0 transformers==4.35.2 sentence-transformers==2.2.2
```

Memory errors on large operations

The model runs on CPU. Ensure 2GB+ free RAM, and reduce top_k in read operations if needed.
## 📄 License

MIT License - feel free to use in your projects!

## 🤝 Contributing

PRs welcome! Please:

- Follow MCP security guidelines
- Add tests for new features
- Update documentation

## 🔗 Resources

Built with 🧠 for persistent LLM memory
## Download files
### hippocampus_memory_mcp-1.0.0.tar.gz

File metadata

- Download URL: hippocampus_memory_mcp-1.0.0.tar.gz
- Upload date:
- Size: 20.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.9

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | f493de106f374274d512bda066046b85a4b8a5b27a75e43e17c71b85e6e2345e |
| MD5 | c3df7ff8079836f6d5949e96bd5a083f |
| BLAKE2b-256 | ee16f9a9d14c9a65f5b5213bed9c227793b278e84e954405af45c264a35a9c0e |
### hippocampus_memory_mcp-1.0.0-py3-none-any.whl

File metadata

- Download URL: hippocampus_memory_mcp-1.0.0-py3-none-any.whl
- Upload date:
- Size: 21.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.9

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 9c620bc94a5a6c039953380b92ffbc5a96258d2b65fc9e2e48bf0f4a88667a85 |
| MD5 | 38f7d0cfa5a29b35ad1144b6ffbd2d6f |
| BLAKE2b-256 | eb2e0695f603effa3ba81963919700be7eb1d1b8afc1c131b7fce852021972f8 |