Manhattan MCP
Give AI Agents Persistent Memory - MCP Server for the Manhattan Memory System
Manhattan MCP is a local Model Context Protocol (MCP) server that connects AI agents (Claude Desktop, Cursor, Windsurf, etc.) to the Manhattan Memory System, a cloud-based persistent memory service for AI assistants.
Features
- 🧠 Persistent Memory - Store and retrieve information across conversations
- 🔍 Semantic Search - Find relevant memories using natural language queries
- 🤖 AI-Generated Answers - Get comprehensive answers using memory context
- 👤 Multi-Agent Support - Create separate memory spaces for different use cases
- 📊 Analytics - Track memory usage and agent statistics
- 💾 Export/Import - Backup and restore memory data
Installation
pip install manhattan-mcp
Quick Start
1. Get Your API Key
Sign up at https://themanhattanproject.ai to get your API key.
2. Set Environment Variable
export MANHATTAN_API_KEY="your-api-key-here"
Or create a .env file:
MANHATTAN_API_KEY=your-api-key-here
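As a sketch, the key lookup can be reproduced in a few lines of Python. The exact loading logic below is an assumption for illustration; the package may rely on a library such as python-dotenv internally:

```python
import os

def load_env_file(path=".env"):
    """Parse simple KEY=VALUE lines from a .env file (comments ignored)."""
    values = {}
    try:
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    values[key.strip()] = value.strip().strip('"')
    except FileNotFoundError:
        pass
    return values

def get_api_key():
    """Prefer the process environment, then fall back to a local .env file."""
    return os.environ.get("MANHATTAN_API_KEY") or load_env_file().get("MANHATTAN_API_KEY")
```

Either source works; the process environment takes precedence if both are set.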
3. Configure Your AI Client
Claude Desktop
Add to your Claude Desktop configuration (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):
{
  "mcpServers": {
    "manhattan": {
      "command": "manhattan-mcp",
      "args": ["start"]
    }
  }
}
Cursor
Add to your Cursor MCP settings (.cursor/mcp.json):
{
  "mcpServers": {
    "manhattan": {
      "command": "manhattan-mcp"
    }
  }
}
Windsurf
Add to your Windsurf configuration:
{
  "mcpServers": {
    "manhattan": {
      "command": "manhattan-mcp",
      "args": ["start"]
    }
  }
}
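All three clients share the same JSON shape, so a small helper can merge the Manhattan entry into an existing config file without clobbering other servers. This helper is illustrative only, not part of the package:

```python
import json
from pathlib import Path

def add_manhattan_server(config_path):
    """Insert (or overwrite) the 'manhattan' entry under 'mcpServers',
    preserving any other servers already configured in the file."""
    path = Path(config_path)
    config = json.loads(path.read_text()) if path.exists() else {}
    config.setdefault("mcpServers", {})["manhattan"] = {
        "command": "manhattan-mcp",
        "args": ["start"],
    }
    path.write_text(json.dumps(config, indent=2))
    return config
```

Point it at the config path for your client (for Claude Desktop on macOS, the path shown above).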
4. Start Using Memory!
Once configured, your AI agent will have access to 35+ memory tools:
- `search_memory` - Search for relevant memories
- `add_memory_direct` - Store new information
- `get_context_answer` - Get AI-generated answers with memory context
- `session_start` / `session_end` - Manage conversation sessions
- And many more!
Available Tools
Memory Operations
| Tool | Description |
|---|---|
| `search_memory` | Search memories using natural language |
| `add_memory_direct` | Store structured memories |
| `get_context_answer` | Get AI answers using memory context |
| `update_memory_entry` | Update existing memories |
| `delete_memory_entries` | Delete specific memories |
Agent Management
| Tool | Description |
|---|---|
| `create_agent` | Create a new memory agent |
| `list_agents` | List all your agents |
| `get_agent` | Get agent details |
| `update_agent` | Update agent configuration |
| `delete_agent` | Permanently delete an agent |
Session Management
| Tool | Description |
|---|---|
| `session_start` | Initialize a conversation |
| `session_end` | End session and sync memories |
| `pull_context` | Load relevant context |
| `push_memories` | Sync pending memories |
AI Helpers
| Tool | Description |
|---|---|
| `auto_remember` | Automatically extract facts from messages |
| `should_remember` | Check whether information is worth storing |
| `what_do_i_know` | Summarize known user information |
Configuration Options
| Environment Variable | Description | Default |
|---|---|---|
| `MANHATTAN_API_KEY` | Your API key (required) | - |
| `MANHATTAN_API_URL` | API endpoint URL | https://themanhattanproject.ai/mcp |
| `MANHATTAN_AGENT_ID` | Default agent ID | Enterprise default |
| `MANHATTAN_TIMEOUT` | Request timeout (seconds) | 120 |
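The table maps directly onto environment lookups. The sketch below shows how the server might resolve its settings; the exact resolution logic, and using `None` to stand in for the enterprise default agent ID, are assumptions:

```python
import os

def load_config(env=None):
    """Resolve Manhattan MCP settings from the environment, applying the
    documented defaults; MANHATTAN_API_KEY is the only required value."""
    if env is None:
        env = os.environ
    api_key = env.get("MANHATTAN_API_KEY")
    if not api_key:
        raise RuntimeError("MANHATTAN_API_KEY is required")
    return {
        "api_key": api_key,
        "api_url": env.get("MANHATTAN_API_URL", "https://themanhattanproject.ai/mcp"),
        "agent_id": env.get("MANHATTAN_AGENT_ID"),  # None -> enterprise default
        "timeout": float(env.get("MANHATTAN_TIMEOUT", "120")),
    }
```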
CLI Commands
# Start the MCP server (default)
manhattan-mcp start
# Show version
manhattan-mcp --version
# Show help
manhattan-mcp --help
Example Usage
Once your AI agent is connected, it can use memory like this:
Storing information:
User: My name is Sarah and I prefer Python over JavaScript.
AI: *calls add_memory_direct to store this preference*
Nice to meet you, Sarah! I've noted your preference for Python.
Retrieving context:
User: What programming language should I use for this project?
AI: *calls search_memory to find preferences*
Based on your preference for Python, I'd recommend using it for this project!
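The flow in these two transcripts - decide whether a message is worth keeping, store it, then search it later - can be sketched against a stand-in client. The `MemoryClient` class and its method bodies below are hypothetical toys, not the package's real API or its semantic search:

```python
class MemoryClient:
    """Hypothetical in-memory stand-in for the MCP tool surface."""

    def __init__(self):
        self.memories = []

    def should_remember(self, message):
        # Toy heuristic: personal facts and preferences are worth keeping.
        return any(cue in message.lower() for cue in ("my name is", "i prefer", "i like"))

    def add_memory_direct(self, text):
        self.memories.append(text)

    def search_memory(self, query):
        # Toy keyword match standing in for semantic search.
        terms = query.lower().split()
        return [m for m in self.memories if any(t in m.lower() for t in terms)]

client = MemoryClient()
message = "My name is Sarah and I prefer Python over JavaScript."
if client.should_remember(message):
    client.add_memory_direct(message)
hits = client.search_memory("Python preference")
```

In practice the agent makes these decisions itself, invoking the MCP tools as it converses.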
Development
# Clone the repository
git clone https://github.com/agent-architects/manhattan-mcp
cd manhattan-mcp
# Install in development mode
pip install -e ".[dev]"
# Run tests
pytest
License
MIT License - see LICENSE for details.
Links
- 🌐 Website
- 📖 Documentation
- 🐛 Issues
- 💬 Discord
File details
Details for the file manhattan_mcp-0.1.15.tar.gz.
File metadata
- Download URL: manhattan_mcp-0.1.15.tar.gz
- Upload date:
- Size: 23.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.2
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | c6f21e025c07e0e56b4804e680d7169ce929ec11e469d8a28a96d6fea6c92fca |
| MD5 | f4849b0997dbc9b3594be12fa8bc307a |
| BLAKE2b-256 | 251e6b6d22c649fc36ffae754b442f2d565385c95571be33032774dc861095e4 |
File details
Details for the file manhattan_mcp-0.1.15-py3-none-any.whl.
File metadata
- Download URL: manhattan_mcp-0.1.15-py3-none-any.whl
- Upload date:
- Size: 25.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.2
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 6343f3e0c4635ccc9213a5e5b9e36249a43a980ddd0dfbb1ddac99ff205586c8 |
| MD5 | 78ed2121c7d5b3df09c5659ef7302c98 |
| BLAKE2b-256 | 1e5cd625f54ce83e14396c13c5fd9f0e86068bf11dba6ccf32def7767942a709 |