MIRIX Server - Multi-Agent Personal Assistant with Advanced Memory System


⚠️ This is Intuit's fork of MIRIX. The README below describes the upstream OSS project. For this fork's integration with the Enterprise Context & Memory Service (ECMS), see CLAUDE.md and docs/ARCHITECTURE.md. Key differences:

  • Screen activity capture is not used — memories come from conversations only.
  • ECMS imports MIRIX as a Python library and does not run MIRIX's FastAPI app.
  • Agent chaining is disabled; each agent runs exactly once per save.
  • A provenance sidecar (memory_source, memory_citation, source_message) is written alongside every memory (VEPAGE-760).
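As a rough illustration of the last point, the sidecar can be thought of as a small record attached to each memory. The field names (memory_source, memory_citation, source_message) come from the list above; the record structure and helper below are a sketch, not the actual ECMS schema:

```python
# Hypothetical sketch of the provenance sidecar described above.
# Only the three field names are taken from this README; everything
# else (the helper, the example values) is illustrative.

def build_provenance_sidecar(source: str, citation: str, message_id: str) -> dict:
    """Assemble the provenance fields written alongside a memory."""
    return {
        "memory_source": source,        # where the memory originated (e.g. a conversation)
        "memory_citation": citation,    # pointer back to the supporting text
        "source_message": message_id,   # id of the message the memory was derived from
    }

sidecar = build_provenance_sidecar("conversation", "msg-42:sentence-1", "msg-42")
```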

MIRIX - Multi-Agent Personal Assistant with an Advanced Memory System

Your personal AI that builds memory through screen observation and natural conversation

| 🌐 Website | 📚 Documentation | 📄 Paper | 💬 Discord


Key Features 🔥

  • Multi-Agent Memory System: Six specialized memory components (Core, Episodic, Semantic, Procedural, Resource, Knowledge Vault) managed by dedicated agents
  • Screen Activity Tracking: Continuous visual data capture and intelligent consolidation into structured memories
  • Privacy-First Design: All long-term data stored locally with user-controlled privacy settings
  • Advanced Search: PostgreSQL-native BM25 full-text search with vector similarity support
  • Multi-Modal Input: Text, images, voice, and screen captures processed seamlessly
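The six memory components named above map one-to-one onto dedicated agents. A minimal sketch (the component names are from this README; the enum and mapping are illustrative, not MIRIX's actual types):

```python
from enum import Enum

class MemoryComponent(Enum):
    """The six memory types listed above, each managed by a dedicated agent."""
    CORE = "core"
    EPISODIC = "episodic"
    SEMANTIC = "semantic"
    PROCEDURAL = "procedural"
    RESOURCE = "resource"
    KNOWLEDGE_VAULT = "knowledge_vault"

# One dedicated agent per component, mirroring the agent names that appear
# in the meta_agent_config later in this README.
AGENT_FOR = {c: f"{c.value}_memory_agent" for c in MemoryComponent}
```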

Quick Start

Step 1: Backend & Dashboard (Docker):

docker compose up -d --pull always

Step 2: Create an API key in the dashboard (http://localhost:5173) and set it as the MIRIX_API_KEY environment variable.
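The key can then be read back from the environment rather than hardcoded. A small sketch (require_api_key is a hypothetical helper, not part of mirix-client):

```python
import os
from typing import Mapping

def require_api_key(env: Mapping[str, str] = os.environ) -> str:
    """Return the MIRIX_API_KEY created in Step 2, failing loudly if unset."""
    key = env.get("MIRIX_API_KEY", "")
    if not key:
        raise RuntimeError(
            "MIRIX_API_KEY is not set; create a key in the dashboard "
            "(http://localhost:5173) and export it first."
        )
    return key
```

The client in the example below can then be constructed with api_key=require_api_key() instead of a hardcoded string.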

Step 3: Install the Python client, mirix-client (https://pypi.org/project/mirix-client/):

pip install mirix-client

Now you are ready to go! See the example below:

from mirix import MirixClient

client = MirixClient(
    api_key="your-api-key",
    base_url="http://localhost:8531",
)

client.initialize_meta_agent(
    config={
        "llm_config": {
            "model": "gemini-2.0-flash",
            "model_endpoint_type": "google_ai",
            "api_key": "your-api-key-here",
            "model_endpoint": "https://generativelanguage.googleapis.com",
            "context_window": 1_000_000,
        },
        "embedding_config": {
            "embedding_model": "text-embedding-004",
            "embedding_endpoint_type": "google_ai",
            "api_key": "your-api-key-here",
            "embedding_endpoint": "https://generativelanguage.googleapis.com",
            "embedding_dim": 768,
        },
        "meta_agent_config": {
            "agents": [
                {
                    "core_memory_agent": {
                        "blocks": [
                            {"label": "human", "value": ""},
                            {"label": "persona", "value": "I am a helpful assistant."},
                        ]
                    }
                },
                "resource_memory_agent",
                "semantic_memory_agent",
                "episodic_memory_agent",
                "procedural_memory_agent",
                "knowledge_vault_memory_agent",
            ],
        },
    }
)

client.add(
    user_id="demo-user",
    messages=[
        {"role": "user", "content": [{"type": "text", "text": "The moon now has a president."}]},
        {"role": "assistant", "content": [{"type": "text", "text": "Noted."}]},
    ],
)

memories = client.retrieve_with_conversation(
    user_id="demo-user",
    messages=[
        {"role": "user", "content": [{"type": "text", "text": "What did we discuss about MirixDB in the last 4 days?"}]},
    ],
    limit=5,
)
print(memories)

For more API examples, see samples/run_client.py.
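The nested messages format used by add() and retrieve_with_conversation() above is verbose for plain text; a small helper can build it. This is a convenience sketch, not part of mirix-client:

```python
def text_message(role: str, text: str) -> dict:
    """Wrap plain text in the role/content structure the client expects."""
    return {"role": role, "content": [{"type": "text", "text": text}]}

# Reproduces the payload from the client.add() example above.
messages = [
    text_message("user", "The moon now has a president."),
    text_message("assistant", "Noted."),
]
```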

License

Mirix is released under the Apache License 2.0. See the LICENSE file for more details.

Contact

For questions, suggestions, or issues, please open an issue on the GitHub repository or contact us at founders@mirix.io.

Join Our Community

Connect with other Mirix users, share your thoughts, and get support:

💬 Discord Community

Join our Discord server for real-time discussions, support, and community updates: https://discord.gg/S6CeHNrJ

🎯 Weekly Discussion Sessions

We host weekly discussion sessions where you can:

  • Discuss issues and bugs
  • Share ideas about future directions
  • Get general consultations and support
  • Connect with the development team and community

📅 Schedule: Friday nights, 8-9 PM PST
🔗 Zoom Link: https://ucsd.zoom.us/j/96278791276

📱 WeChat Group

Add the WeChat account ari_asm to be invited to the group chat.

Acknowledgement

We would like to thank Letta for open-sourcing their framework, which served as the foundation for the memory system in this project.

Project details

Source Distribution

jl_ecms_server-0.63.0.tar.gz (554.5 kB)

  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.7
  • SHA256: c341b195042b811fc11bfbbafc47621d54ff9ad96b3481388795626b29ca2307
  • MD5: eb441e873b2956f68ea2ccf75b6aba79
  • BLAKE2b-256: 60e76ee1bf4b4fc7e84c7e16c14caa57de86bd078753398ce0355efb9f4f1a89

Built Distribution

jl_ecms_server-0.63.0-py3-none-any.whl (676.7 kB)

  • Tags: Python 3
  • SHA256: 6b10e30f9fd915084e59a037188efe7c18eed266d325b5e498c0f02b0ed10b95
  • MD5: b05667c05498d6b92ab3c423cb78f25e
  • BLAKE2b-256: 5c489d2415a0cc4c3b16a3711449c62dc84c530d227eeb29c4f0ca28e4078327
