The Autonomous Nervous System for AI Agents — local-first MCP middleware with temporal graph memory, active coercion, and proactive goal alignment.
Project description
Memento is a revolutionary, open-source middleware that acts as the "Autonomous Nervous System" for your AI agents (like Cursor, Claude Desktop, or Trae).
While most agentic memory systems rely on expensive, cloud-hosted graph databases, Memento gives your local agents a powerful, zero-cost, PageRank-optimized SQLite temporal graph with Reciprocal Rank Fusion (RRF) for high-quality semantic retrieval.
But Memento goes beyond memory: it transforms your AI from a reactive assistant into a proactive, context-aware, and strictly aligned pair programmer.
🌟 Enterprise-Grade Architecture
1. 🧠 Hybrid Search Engine (RRF)
A zero-cost, local-first temporal graph memory provider optimized for AI.
- Built on SQLite FTS5 (Full-Text Search) and Cosine Similarity (Vector Embeddings).
- Fuses exact keyword matches and semantic meaning using Reciprocal Rank Fusion (RRF).
- Write-Ahead Logging (WAL) enabled for extreme concurrency without database locking.
- Completely private and runs locally.
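The fusion step is easy to illustrate. Below is a minimal, generic RRF sketch (the constant `k = 60` and the function name are illustrative, not Memento's actual internals): each document's score is the sum of `1 / (k + rank)` across the keyword and vector rankings, so documents that rank well in both lists float to the top.

```python
def rrf_fuse(keyword_ranking, vector_ranking, k=60):
    """Fuse two ranked lists of document IDs via Reciprocal Rank Fusion.

    A document's score is the sum of 1 / (k + rank) over every ranking
    it appears in; k dampens the dominance of the very top ranks.
    """
    scores = {}
    for ranking in (keyword_ranking, vector_ranking):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# A document ranked highly by both FTS5 and cosine similarity wins overall.
fused = rrf_fuse(["a", "b", "c"], ["b", "c", "a"])
```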
2. 🛡️ Active Coercion (The Code Immune System)
A deterministic, regex/AST-based engine that physically prevents the AI (or you) from introducing known anti-patterns.
- Pre-commit Hook Integration: Automatically blocks `git commit` if the code violates architectural rules.
- IDE Runtime Notifications: Sends push notifications directly to the AI if it generates bad code.
- 100% Deterministic: Zero LLM hallucinations during enforcement. Bypassable via `// memento-override` tokens.
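A deterministic regex gate with an override token can be sketched in a few lines. The rule below (flagging bare `except:` clauses) and the exact override-matching behavior are illustrative assumptions, not Memento's shipped rule set:

```python
import re

# Hypothetical rule: flag bare `except:` clauses as an anti-pattern.
RULES = [
    (re.compile(r"^\s*except\s*:", re.MULTILINE), "bare except clause"),
]
OVERRIDE_TOKEN = "// memento-override"

def check(source: str) -> list[str]:
    """Return rule violations; an override token anywhere suppresses them."""
    if OVERRIDE_TOKEN in source:
        return []
    return [message for pattern, message in RULES if pattern.search(source)]
```

Because matching is pure regex, the same check gives the same answer every time, which is what makes it safe to wire into a pre-commit hook.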
3. 🎯 Tri-State Goal Enforcer
Keep your AI strictly aligned with your project's core objectives:
- Level 1 (Context Injection): Seamlessly injects active goals into the AI's context on every memory retrieval.
- Level 2 (Strict Mentor Checkpoint): Forces the AI to submit code or plans for a strict evaluation against the project's core goals.
- Level 3 (Proactive Autonomy): The AI is instructed via MCP to autonomously query Memento before writing any code.
Goal Enforcer MCP Tools:
| Tool | Description |
|---|---|
| `memento_set_goals` | Set active goals (replace or append mode) |
| `memento_list_goals` | List goals with optional context and active-only filters |
| `memento_check_goal_alignment` | L2 gate — submit code/plans for strict goal evaluation |
| `memento_configure_enforcement` | Toggle L1/L2/L3 enforcement levels |
4. 🕸️ Dynamic Workspace Router
Zero-config multi-tenant isolation. Memento automatically detects which project repository the AI is currently working on and routes the memory/database to the correct .memento/ folder. No more context bleeding between your Frontend and Backend projects.
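One common way to implement this kind of routing is to walk up from the working directory to the nearest repository root and keep state beside it. This sketch assumes a `.git` marker and a `.memento/` sibling folder; Memento's actual detection logic may differ:

```python
from pathlib import Path

def workspace_state_dir(cwd: Path) -> Path:
    """Walk up from cwd to the nearest repo root (marked by .git)
    and route per-project state to a .memento/ folder beside it."""
    for candidate in [cwd, *cwd.parents]:
        if (candidate / ".git").exists():
            return candidate / ".memento"
    # No repository found: fall back to the working directory itself.
    return cwd / ".memento"
```

Since the lookup runs per request, the same globally configured server transparently serves a different database to each project.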
🧬 The Seven Superpowers
Memento's cognitive layer goes beyond passive memory. Seven autonomous superpowers transform it into a self-improving, proactive system:
SP1: Auto-Consolidation
Automatically detects semantically similar memories and merges them into enriched, deduplicated entries. Uses cosine similarity clustering with sentence-level text fusion.
- `memento_consolidate_memories` — run a full consolidation cycle
- `memento_toggle_consolidation_scheduler` — start/stop background scheduler
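The clustering half of consolidation can be sketched with a greedy pass over embeddings. The threshold value and the greedy seed-based strategy here are illustrative assumptions, not Memento's exact algorithm:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def cluster(embeddings, threshold=0.9):
    """Greedy clustering: each memory joins the first cluster whose
    seed embedding is within the similarity threshold, else seeds a
    new cluster. Returns lists of member indices."""
    clusters = []  # list of (seed_vector, member_indices)
    for i, vec in enumerate(embeddings):
        for seed, members in clusters:
            if cosine(seed, vec) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((vec, [i]))
    return [members for _, members in clusters]
```

Each resulting cluster would then be fused into one enriched, deduplicated memory entry.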
SP2: KG Auto-Extraction
Automatically extracts entities and relationships from memories and populates a temporal knowledge graph using LLM analysis.
- `memento_extract_kg` — extract entities and triples from unprocessed memories
- `memento_toggle_kg_extraction_scheduler` — start/stop background scheduler
SP3: Relevance Tracking
Tracks memory access patterns with hit counting, temporal boosting, and exponential time decay. Frequently accessed recent memories rank higher.
- `memento_get_relevance_stats` — hot/cold distribution, hit counts, decay metrics
- `memento_record_memory_hit` — manually boost specific memories
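Hit counting with exponential time decay reduces to a one-line scoring formula. The half-life below (one week) is an illustrative parameter, not Memento's documented default:

```python
def relevance(hits: int, age_hours: float, half_life_hours: float = 168.0) -> float:
    """Score a memory by hit count with exponential time decay:
    the score halves every `half_life_hours` since last access."""
    decay = 0.5 ** (age_hours / half_life_hours)
    return hits * decay
```

Under this scheme a memory hit ten times an hour ago outranks one hit ten times a month ago, which is exactly the "frequently accessed recent memories rank higher" behavior described above.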
SP4: Predictive Cache
Pre-warms an in-memory cache of related memories before starting work. Proactive context injection that anticipates what the AI will need.
- `memento_warm_predictive_cache` — warm cache with context text
- `memento_get_predictive_cache_stats` — hit rate, cache size, TTL info
SP5: Self-Evaluation Loop
Computes memory health scores (0-100) based on freshness, coverage, redundancy, and size. Identifies stale and orphan memories for cleanup.
- `memento_get_quality_report` — full quality report with health score
- `memento_record_quality_evaluation` — rate memory quality (0-1)
- `memento_system_health` — comprehensive system health dashboard
- `memento_kg_health` — knowledge graph entity/triple metrics
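A composite 0-100 health score is typically a weighted blend of normalized signals. The weights below are illustrative assumptions; note that redundancy counts against the score:

```python
def health_score(freshness: float, coverage: float,
                 redundancy: float, size_ok: float) -> int:
    """Blend 0-1 component signals into a 0-100 health score.
    Weights are illustrative; redundancy is inverted because
    more duplication means worse health."""
    score = 100 * (0.3 * freshness + 0.3 * coverage
                   + 0.2 * (1 - redundancy) + 0.2 * size_ok)
    return round(score)
```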
SP6: Cross-Workspace Sharing
Share memories between different Memento workspaces. Enables cross-project context flow with directional sync tracking.
- `memento_share_memory_to_workspace` — share a memory to another project
- `memento_get_cross_workspace_stats` — sync statistics
SP7: Real-Time Notifications
Proactive alerts about relevant context changes, memory events, and high-relevance discoveries. Configurable topics and confidence thresholds.
- `memento_configure_notifications` — enable/disable, set topics and confidence
- `memento_get_pending_notifications` — retrieve pending alerts
- `memento_dismiss_notification` — dismiss an alert
🚀 Quick Start
Choose your preferred installation method:
Option A: pip install (Recommended)
pip install memento-mcp
That's it. The memento-mcp and memento commands are now available globally.
Option B: uvx (Zero-install)
Run Memento instantly without installing anything permanently:
uvx memento-mcp
Option C: pip install from GitHub (Latest dev)
pip install git+https://github.com/JoyciAkira/memento.git
Option D: Clone for development
git clone https://github.com/JoyciAkira/memento.git
cd memento
uv sync
Verify any installation:
python -c "import memento; print(memento.__version__)"
memento-mcp --help
memento --help
📝 Running without OpenAI (offline / testing)
Set MEMENTO_EMBEDDING_BACKEND=none to disable embeddings entirely. Memento falls back to FTS5-only full-text search — no API key needed.
MEMENTO_EMBEDDING_BACKEND=none memento-mcp
🛠️ MCP Configuration (Cursor / Trae / Claude)
Add Memento to your mcp.json or IDE configuration. Thanks to the Dynamic Workspace Router, you only need to configure it once globally.
Config for pip / uvx install (Recommended)
No local clone needed. Just use the installed command directly:
{
"mcpServers": {
"memento": {
"command": "memento-mcp",
"env": {
"OPENAI_API_KEY": "your-api-key-here",
"OPENAI_BASE_URL": "https://api.openai.com/v1",
"MEM0_MODEL": "openai/gpt-4o-mini",
"MEM0_EMBEDDING_MODEL": "text-embedding-3-small"
}
}
}
}
Config for local clone (development)
{
"mcpServers": {
"memento": {
"command": "uv",
"args": [
"--directory",
"/absolute/path/to/memento",
"run",
"memento-mcp"
],
"env": {
"OPENAI_API_KEY": "your-api-key-here",
"OPENAI_BASE_URL": "https://api.openai.com/v1",
"MEM0_MODEL": "openai/gpt-4o-mini",
"MEM0_EMBEDDING_MODEL": "text-embedding-3-small"
}
}
}
}
Environment variables
| Variable | Description |
|---|---|
| `OPENAI_API_KEY` | Required for embeddings and goal checks |
| `OPENAI_BASE_URL` | Optional OpenAI-compatible endpoint |
| `MEM0_MODEL` | LLM used for cognitive features |
| `MEM0_EMBEDDING_MODEL` | Embeddings model used by the hybrid memory provider |
| `MEMENTO_EMBEDDING_BACKEND` | Set to `none` to disable embeddings (FTS5-only fallback) |
| `MEMENTO_DIR` | Workspace root used for routing `.memento/` state |
| `MEMENTO_UI` | Enable local UI (`1`/`true`) |
| `MEMENTO_UI_PORT` | Local UI port (default `8089`) |
⌨️ CLI Usage
Memento also works directly from the terminal — no AI agent required.
# Auto-capture git context (branch, recent commits, diff stats) as a memory
memento capture --auto
# Save a free-form note
memento capture --text "Resolved the auth timeout by increasing JWT expiry to 1h"
# Combine auto context + custom note
memento capture --auto --text "Refactored the retry logic after the incident"
# Search your memories
memento search "how did I fix the promise bug"
# Show workspace status
memento status
The `capture --auto` command extracts the current git branch, last 5 commits, and staged/unstaged diff stats — saving a snapshot of your work context with zero friction. Useful in git hooks, CI scripts, or just as a quick terminal habit.
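For example, a hook could snapshot context after every commit. This is a suggested wiring, not a script shipped with Memento; the hook path and message are yours to choose:

```shell
#!/bin/sh
# .git/hooks/post-commit — snapshot work context after each commit.
# Assumes memento is on PATH; `|| true` keeps a failure from
# interfering with the rest of your git workflow.
memento capture --auto --text "post-commit snapshot" || true
```

Remember to make the hook executable (`chmod +x .git/hooks/post-commit`).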
🧠 Using Memento (via MCP)
Memento exposes a suite of MCP tools, but the primary entrypoint is fully autonomous.
The Proactive Subconscious
The primary memento tool is configured with a Level 3 Proactive Autonomy prompt. You don't need to say "Memento, remember this". The AI is strictly instructed to query Memento before executing any task, formulating its own search queries to retrieve your architectural rules, past bugs, and context.
Managing Active Coercion
You can manage the Code Immune System directly via chat using the exposed MCP tools:
- `memento_toggle_active_coercion`
- `memento_install_git_hooks`
- `memento_add_active_coercion_rule`
- `memento_list_active_coercion_rules`
⚖️ License
Memento is released under the GNU Affero General Public License v3.0 (AGPL-3.0). This ensures that Memento remains free and open-source forever. If you modify Memento and offer it as a service over a network (e.g., as a Cloud SaaS), you must release your modified source code under the same AGPL-3.0 license.
See the LICENSE file for more details.
File details
Details for the file memento_mcp-0.3.0.tar.gz.
File metadata
- Download URL: memento_mcp-0.3.0.tar.gz
- Upload date:
- Size: 426.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `f3a6ae4afae2ae385d1d3fd899875665c899b1d09a6a8ff0b2c43ab73dc40bfe` |
| MD5 | `d2d70fa7127c14914e966a38abdde044` |
| BLAKE2b-256 | `a5908ee3d3570095ff8aaad02633805960211246a06436adb12bc49502ea7c8f` |
File details
Details for the file memento_mcp-0.3.0-py3-none-any.whl.
File metadata
- Download URL: memento_mcp-0.3.0-py3-none-any.whl
- Upload date:
- Size: 179.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `bfa501ca7ceeb5ba40f9747cc7c5fab8ffcb8dea374bb2c4200b9310631a88ec` |
| MD5 | `90caf69cc4a159a22e0cd96b24007e29` |
| BLAKE2b-256 | `2f92acb9fdc268cb3234701bddbaf72175210a2d48721ae0a6c7d31f1332b0cd` |