Moorcheh Universal Memory Layer for Agentic AI
What Is MUMLA?
MUMLA is a universal memory layer for agentic AI. LLMs are stateless and lose context between sessions; MUMLA gives your agents long-term memory so they can carry context forward and remember what matters across sessions.
Why MUMLA Performs
MUMLA is built for teams that want high-quality agent memory without graph-heavy complexity. It combines immediate semantic availability, low-overhead serverless operation, and strong real-world memory accuracy so you can ship production workflows with a simpler architecture.
- Zero-cost ingestion latency: No indexing wait or token usage at ingestion, so memories are available for retrieval immediately.
- Zero storage cost at idle: Serverless architecture scales to zero when not in use.
- State-of-the-art benchmark performance: Final evaluation results reached 89.8% on LongMemEval and 87.1% on LoCoMo.
🚀 MUMLA CLI
MUMLA comes with a powerful, developer-friendly Command Line Interface. You can initialize your environment, start the server, and manage your agent's memories completely from your terminal!
You need a Moorcheh API key to use MUMLA. Create one in the Moorcheh Dashboard.
MUMLA has native LLM access, so you don't need a separate external model API key for common memory workflows.
1. Install & Configure
pip install mumla
# Set up your environment (prompts for your Moorcheh API key)
mumla
2. Test Agent Memories
# Create and activate an agent session
mumla agent create customer-support
mumla agent activate customer-support
# Store memories with specific semantic types
mumla remember "The user prefers dark mode for the dashboard."
mumla remember "User's timezone is PST."
# Instantly recall relevant context
mumla recall "What mode does the user like?"
# Get grounded AI answers using built-in RAG
mumla answer "Based on the memory, what should the theme be set to?"
Supported Memory Types
instruction, fact, decision, goal, commitment, preference, relationship, context, event, learning, observation, artifact, error
Use memory types to categorize what you store so retrieval is cleaner and more controllable:
- Save with a specific type:
mumla remember "User prefers concise answers" --type preference
- Filter by type when searching:
mumla recall "user communication style" --type preference
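A few more illustrative combinations of the --type flag with types from the supported list above (the memory texts themselves are invented examples, not output from the tool):

```shell
# Tag memories so later recalls can filter by category
mumla remember "Always answer in formal English." --type instruction
mumla remember "The API rate limit is 100 requests/min." --type fact
mumla remember "We chose PostgreSQL over MongoDB for persistence." --type decision
mumla remember "Ship the v2 dashboard by end of quarter." --type goal
```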
Key Features
| Capability | Commands | What it does |
|---|---|---|
| System status dashboard | mumla status | View environment, configuration, server health, active session, and registered agents. |
| Local server + web dashboard | mumla serve, mumla ui | Run the MUMLA API locally and open an interactive browser UI. |
| Agent lifecycle management | mumla agent ... | Create/list agents, activate/deactivate sessions, and run agent bootstrap for an intelligence snapshot. |
| Memory capture at scale | mumla remember | Store single memories with metadata or batch-ingest up to 100 records from JSON. |
| Advanced retrieval modes | mumla recall | Run standard search plus temporal queries (--as-of, --changed-since, --current-only) with filters. |
| Grounded QA over memory | mumla answer | Generate RAG answers using retrieved memory context. |
| Daily intelligence workflows | mumla daily-summary, mumla conflicts | Generate summaries, detect contradictions, and resolve conflicts interactively. |
| Session and automation controls | mumla session ..., mumla schedule ... | Inspect/extend sessions and enable scheduled daily-summary runs. |
| Memory file pipelines | mumla memory export, mumla memory sync | Export structured memory markdown and sync MEMORY.md into projects. |
| Configuration inspection | mumla config show | Inspect API key status, active agent/session, server settings, and schedule time. |
| Multi-agent ecosystem integration | mumla connect ... | Connect/remove/list integrations for Claude Code, Codex, Cursor, Windsurf, Antigravity, Gemini CLI, Cline, Continue, OpenCode, Goose, Roo, GitHub Copilot, and Augment (local or global). |
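The temporal flags listed for mumla recall can be combined with an ordinary query string. A sketch, assuming the flags accept ISO dates (the exact date format is not documented here):

```shell
# Recall memories as they existed at a past point in time
mumla recall "deployment environment" --as-of 2025-01-15

# Only return memories that changed since a given date
mumla recall "user preferences" --changed-since 2025-03-01

# Restrict results to memories that are still current
mumla recall "project goals" --current-only
```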
Additional setup guides are available on the Moorcheh YouTube channel.
🎯 REST API Endpoints
For programmatic access, MUMLA exposes a clean, session-based REST API.
Important: MUMLA does not have a hosted API server yet. To use these endpoints, run your own local server first:
cd mumla
# Start server
mumla serve
By default, call the endpoints on your local server (for example, http://127.0.0.1:8000).
Agent Management
- POST /api/v2/agents - Create a new agent namespace
- GET /api/v2/agents - List all available agents
- GET /api/v2/agents/{agent_id} - Get metadata for a specific agent
- DELETE /api/v2/agents/{agent_id} - Delete an agent and all its memories
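A hedged curl sketch of the agent-management endpoints against a local server. The request-body field name for creating an agent is an assumption, not documented here; check the API reference for the exact schema:

```shell
BASE="http://127.0.0.1:8000"
API_KEY="your-moorcheh-api-key"

# Create a new agent namespace (JSON field name is an assumption)
curl -X POST "$BASE/api/v2/agents" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"agent_id": "customer-support"}'

# List all available agents
curl "$BASE/api/v2/agents" \
  -H "Authorization: Bearer $API_KEY"

# Delete an agent and all its memories
curl -X DELETE "$BASE/api/v2/agents/customer-support" \
  -H "Authorization: Bearer $API_KEY"
```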
Session Management
- POST /api/v2/agents/{agent_id}/activate - Start a session (returns a 6-hour JWT session_token)
- POST /api/v2/agents/{agent_id}/deactivate - Manually end a session
- GET /api/v2/session/current - Check the status/validity of the current session
- POST /api/v2/session/extend - Extend the session expiration time
Memory Operations
- POST /api/v2/agents/{agent_id}/remember - Store a new memory into the agent's semantic database
- GET /api/v2/agents/{agent_id}/recall - Run an exact semantic search against the agent's memories
- POST /api/v2/agents/{agent_id}/answer - Generate a grounded RAG answer based on the agent's memories
Authentication Required:
- Authorization: Bearer {moorcheh_api_key} header
- X-Session-Token: {session_token} header (for session and memory operations)
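Putting both headers together, here is a sketch of a full session flow with curl. The response field session_token matches the description above, but the request-body field names (content, type) and the recall query parameter are assumptions; verify them against the actual API schema:

```shell
BASE="http://127.0.0.1:8000"
API_KEY="your-moorcheh-api-key"

# Activate a session; the response carries a 6-hour JWT session_token
TOKEN=$(curl -s -X POST "$BASE/api/v2/agents/customer-support/activate" \
  -H "Authorization: Bearer $API_KEY" | python3 -c \
  "import json, sys; print(json.load(sys.stdin)['session_token'])")

# Store a memory (body field names are assumptions)
curl -X POST "$BASE/api/v2/agents/customer-support/remember" \
  -H "Authorization: Bearer $API_KEY" \
  -H "X-Session-Token: $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"content": "The user prefers dark mode.", "type": "preference"}'

# Recall via semantic search (query parameter name is an assumption)
curl -G "$BASE/api/v2/agents/customer-support/recall" \
  -H "Authorization: Bearer $API_KEY" \
  -H "X-Session-Token: $TOKEN" \
  --data-urlencode "query=What mode does the user like?"
```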
🤖 Why Moorcheh?
Moorcheh.ai - The world's only no-indexing semantic database.
The Revolutionary Difference
- Traditional Vector DBs: Minutes of indexing delay, approximate search, stateful architecture
- Moorcheh: Instant availability, exact search, serverless/stateless, 80% compute savings
Real Impact
| Feature | Traditional | Moorcheh |
|---|---|---|
| Write-to-Search | Minutes | Instant |
| Accuracy | Approximate | Exact |
| Idle Costs | Always running | Zero |
| Free Tier | Limited | 100K ops/month |
📞 Support & Documentation
Have questions or feedback? We're here to help:
- Docs: https://docs.moorcheh.ai
- Discord: Join our Discord server
- Email: support@moorcheh.ai
MIT License
Download files
File details
Details for the file mumla-2.0.0.tar.gz.
File metadata
- Download URL: mumla-2.0.0.tar.gz
- Size: 401.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 9b8db426c381027ae6fc8d5360c4473ce0c5a55655ba79646b2f40458b31b891 |
| MD5 | 1634caef6305d5e6256437085b2909f6 |
| BLAKE2b-256 | 5fdb2c2459f332ec8bdf3fc750f80bfe9515c6c026428ca8337be941717f6e1d |
File details
Details for the file mumla-2.0.0-py3-none-any.whl.
File metadata
- Download URL: mumla-2.0.0-py3-none-any.whl
- Size: 364.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | d06da6909e6eeb106b8a098a57a5c15bb096217a57a6d7993829c9c14de68005 |
| MD5 | 57cd50088ff061da7526993572f0ed63 |
| BLAKE2b-256 | 5214ea24a04ca6de0868ae591d8dd0430350836fedd16088abb7acd63a77e6a3 |