Multi-tenant memory service for AI assistants

Memoria

Secure · Auditable · Programmable Memory for AI Agents

Persistent memory layer for AI agents (Kiro, Cursor, Claude Code, any MCP-compatible agent) with Git-level version control — snapshots, branches, rollback, and full audit trail.

Full documentation: https://github.com/matrixorigin/Memoria


Two Modes

|          | Managed / Remote                           | Self-hosted                     |
|----------|--------------------------------------------|---------------------------------|
| Flag     | --api-url + --token                        | --db-url                        |
| Requires | Nothing — connect to existing server       | MatrixOne DB + embedding config |
| When     | Team / SaaS, admin gives you a URL + token | Personal setup, local dev       |

Install

# Managed / remote mode — no extras needed
pip install memoria

# Self-hosted embedded mode — choose an embedding provider:
pip install "memoria[openai-embedding]"   # OpenAI / SiliconFlow / any OpenAI-compatible endpoint
pip install "memoria[local-embedding]"    # Local sentence-transformers (~900MB download)

# If no NVIDIA GPU is available, install CPU-only PyTorch first to avoid large CUDA dependencies:
pip install torch --index-url https://download.pytorch.org/whl/cpu
pip install "memoria[local-embedding]"

Quick Start

Managed mode (no database, no embedding setup)

If your team or provider gives you a server URL and API token:

cd your-project
memoria init --api-url "https://your-server:8100" --token "sk-your-key..."

Restart your AI tool — done.
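
To see what was configured, you can inspect the MCP config that memoria init wrote. The paths below are the usual project-level locations for Kiro, Cursor, and Claude Code; treat them as assumptions and check your tool's docs if none match:

import json
from pathlib import Path

# Assumed project-level MCP config locations (tool-dependent).
candidates = [".kiro/settings/mcp.json", ".cursor/mcp.json", ".mcp.json"]

for rel in candidates:
    path = Path(rel)
    if path.exists():
        print(f"== {rel} ==")
        print(json.dumps(json.loads(path.read_text()), indent=2))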

Self-hosted mode (run your own database)

# 1. Start MatrixOne
git clone https://github.com/matrixorigin/Memoria.git && cd Memoria
docker compose up -d

# 2. Configure
cd your-project
memoria init --db-url "mysql+pymysql://root:111@localhost:6001/memoria"

# With OpenAI-compatible embedding (recommended over local model)
memoria init --db-url "mysql+pymysql://root:111@localhost:6001/memoria" \
             --embedding-provider openai \
             --embedding-base-url https://api.siliconflow.cn/v1 \
             --embedding-api-key sk-... \
             --embedding-model BAAI/bge-m3 \
             --embedding-dim 1024

memoria init auto-detects Kiro / Cursor / Claude and writes MCP config + steering rules.
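
If memoria init or the agent later fails to connect, it can help to verify that MatrixOne is reachable with the same credentials. A minimal sketch using SQLAlchemy with the PyMySQL driver (the same driver the mysql+pymysql:// URL above implies; install both with pip if they are not already present):

from sqlalchemy import create_engine, text

# Connect to the server itself; the `memoria` database may not exist yet at this point.
engine = create_engine("mysql+pymysql://root:111@localhost:6001")

with engine.connect() as conn:
    # MatrixOne speaks the MySQL wire protocol, so a plain SELECT works.
    print(conn.execute(text("SELECT 1")).scalar())  # expect: 1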

Verify

memoria status

Embedding Providers (self-hosted mode only)

| Provider             | Quality | Privacy                     | Cost             | First-use latency            |
|----------------------|---------|-----------------------------|------------------|------------------------------|
| Local (default)      | Good    | ✅ Data never leaves machine | Free             | ~900MB download on first use |
| OpenAI / SiliconFlow | Better  | ⚠️ Text sent to API          | API key required | None                         |
| Custom service       | Varies  | Depends on host             | Self-hosted      | None                         |

Managed mode users don't need to configure embedding — the server handles it.
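
If you configured the OpenAI-compatible provider, you can confirm the endpoint, key, model, and dimension line up with what you passed to memoria init. The values below mirror the SiliconFlow example above; any OpenAI-compatible endpoint works the same way:

from openai import OpenAI

client = OpenAI(
    base_url="https://api.siliconflow.cn/v1",  # --embedding-base-url
    api_key="sk-...",                          # --embedding-api-key
)

resp = client.embeddings.create(model="BAAI/bge-m3", input="hello memoria")
print(len(resp.data[0].embedding))  # should match --embedding-dim (1024 here)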


MCP Tools

Memoria exposes its memory operations as MCP tools: memory_store, memory_retrieve, memory_correct, memory_purge, memory_search, memory_profile, memory_snapshot, memory_rollback, memory_branch, memory_merge, memory_diff, and more.
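
Your agent calls these tools over MCP automatically, but you can also exercise them directly with the MCP Python SDK. The launch command and the memory_store argument names below are illustrative assumptions; copy the real server entry from the MCP config that memoria init generated:

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Hypothetical launch command; use the one from your generated MCP config.
    params = StdioServerParameters(command="memoria", args=["serve"])

    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])

            # Argument names are illustrative; check list_tools() for the real schema.
            result = await session.call_tool(
                "memory_store",
                arguments={"content": "User prefers pytest over unittest"},
            )
            print(result)

asyncio.run(main())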


License

Apache-2.0 © MatrixOrigin
