
A universal interface for AI agents with persistent memory, where every conversation has a home


mindroom



Your AI is trapped in apps. We set it free.

AI agents that learn who you are shouldn't forget everything when you switch apps. MindRoom agents follow you everywhere—Slack, Telegram, Discord, WhatsApp—with persistent memory intact.

Deploy once on Matrix. Your agents now work in any chat platform via bridges. They can even visit your client's workspace or join your friend's group chat.

Self-host for complete control or use our encrypted service. Either way, your agents remember you and can collaborate across organizations.

The Problem

Every AI app is a prison:

  • ChatGPT knows your coding style... but can't join your team's Slack
  • Claude understands your writing... but can't access your email
  • GitHub Copilot helps with code... but can't see your project specs
  • You teach each AI from scratch, over and over

Meanwhile, your human team collaborates across Slack, Discord, Telegram, and email daily. Why can't your AI?

The Solution

MindRoom agents:

  • Live in Matrix - A federated protocol like email
  • Work everywhere - Via bridges to Slack, Telegram, Discord, WhatsApp, IRC, email
  • Remember everything - Persistent memory across all platforms
  • Collaborate naturally - Multiple agents working together in threads
  • Respect boundaries - You control which agent sees what data

Built on Proven Infrastructure

MindRoom leverages the Matrix protocol, a decade-old open standard with significant real-world adoption:

Foundation

  • 10+ years of development by the Matrix.org Foundation
  • €10M+ invested in protocol development
  • 100+ developers contributing to the core ecosystem
  • 35+ million users globally

Enterprise Validation

  • German Healthcare: 150,000+ organizations using Ti-Messenger
  • French Government: 5.5 million civil servants on Tchap
  • Military Adoption: NATO, U.S. Space Force, and other defense organizations
  • GDPR Compliant: Built for European privacy standards

What This Means For You

By building on Matrix, MindRoom inherits:

  • Production-tested federation across organizations
  • Military-grade E2E encryption (Olm/Megolm)
  • Professional clients (Element, FluffyChat, Cinny)
  • 50+ maintained bridges to other platforms
  • Proven scale and reliability

This foundation allows MindRoom to focus entirely on agent orchestration and intelligence, rather than reimplementing communication infrastructure.

See It In Action

Monday, in your Matrix room:
You: @assistant Remember our project uses Python 3.11 and FastAPI

Tuesday, in your team's Slack (via bridge):
Colleague: What Python version are we using?
You: @assistant can you help?
Assistant: [Joins from Matrix] We're using Python 3.11 with FastAPI

Wednesday, in client's Telegram (via bridge):
Client: Can your AI review our API spec?
You: @assistant please analyze this
Assistant: [Travels from your server] I'll review this against our FastAPI patterns...

One agent. Every platform. Continuous memory.

The Magic Moment - Cross-Organization Collaboration

Thursday, your client asks in their Discord:
Client: Can our architect AI review this with your team?
You: Sure! @assistant please collaborate with them

Your Assistant: [Joins from your Matrix server]
Client's Architect AI: [Joins from their server]
Together: [They review architecture, sharing context from both organizations]

Two AI agents from different companies collaborating. This is impossible with ChatGPT, Claude, or any other platform.

But It Gets Better - Your Agents Work as a Team

Friday, planning next sprint:
You: @research @analyst @writer Create a competitive analysis report
Research: I'll gather data on our top 5 competitors...
Analyst: I'll identify strategic patterns and opportunities...
Writer: I'll compile everything into an executive summary...
[They work together, transparently, delivering a comprehensive report]

Key Features

🧠 Dual Memory System

  • Agent Memory: Each agent remembers conversations, preferences, and patterns across all platforms
  • Room Memory: Contextual knowledge that stays within specific rooms (work projects, personal notes)
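The split between the two memory scopes can be sketched in a few lines of Python. This is an illustrative toy, not MindRoom's actual API: the class and method names here are invented, and the real system persists memories via Mem0 rather than in-process dicts.

```python
# Illustrative sketch only -- not MindRoom's actual memory API.
# It models the split described above: per-agent memory that follows
# the agent across platforms, and per-room memory that stays put.

class DualMemory:
    def __init__(self):
        self.agent_memory = {}   # agent -> facts (travel with the agent)
        self.room_memory = {}    # (agent, room) -> facts (stay in the room)

    def remember(self, agent, fact, room=None):
        if room is None:
            self.agent_memory.setdefault(agent, []).append(fact)
        else:
            self.room_memory.setdefault((agent, room), []).append(fact)

    def recall(self, agent, room):
        # An agent sees its global memory plus the current room's context.
        return self.agent_memory.get(agent, []) + self.room_memory.get((agent, room), [])


mem = DualMemory()
mem.remember("assistant", "User prefers concise answers")        # agent memory
mem.remember("assistant", "Project uses FastAPI", room="#work")  # room memory
mem.recall("assistant", "#work")  # → both facts
mem.recall("assistant", "#home")  # → only the global fact
```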

🤝 Multi-Agent Collaboration

You: @research @analyst @email Create weekly competitor analysis reports
Research: I'll gather competitor updates
Analyst: I'll identify strategic patterns
Email: I'll compile and send every Friday
[They work together, automatically, every week]

💬 Direct Messages (DMs)

  • Agents respond naturally in 1:1 DMs without needing mentions
  • Add more agents to existing DM rooms for collaborative private work
  • Complete privacy separate from configured public rooms

🔐 Intelligent Trust Boundaries

  • Route sensitive data to local Ollama models on your hardware
  • Use GPT-5.2 for complex reasoning
  • Send general queries to cost-effective cloud models
  • You decide which AI sees what
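One way to express this routing is in the `models:` block of the configuration shown later. The snippet below is a hypothetical sketch: the `ollama` provider key and the model ids are assumptions, following the shape of the config example under Configuration.

```yaml
# Hypothetical sketch -- provider names and ids are assumptions,
# following the models/agents schema shown under Configuration.
models:
  local:             # sensitive work stays on your hardware
    provider: ollama
    id: llama3.1
  default:           # general-purpose cloud model
    provider: anthropic
    id: claude-sonnet-4-6

agents:
  finance:
    model: local     # this agent never sends data to a cloud provider
  assistant:
    model: default
```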

🔌 100+ Integrations

Gmail, GitHub, Spotify, Home Assistant, Google Drive, Reddit, weather services, news APIs, financial data, and many more. Your agents can interact with all your tools. Native Matrix tools include matrix_message, matrix_room, thread_tags, and matrix_api for room, thread, event, state, and room-search operations.

📅 Automation & Scheduling

  • Daily check-ins from your mindfulness agent
  • Scheduled reports and summaries
  • Event-driven workflows (conditional requests converted to polling schedules)
  • Background tasks with human escalation

Who This Is For

  • Teams using Matrix/Element - Add AI to your existing secure infrastructure without migration
  • Open Source Projects - Agents that remember all decisions and can visit contributor chats
  • Consultants & Agencies - Your AI can securely join client workspaces
  • Privacy-Focused Organizations - Self-host everything, own your data completely
  • Developers - Build on our platform, contribute agents, extend functionality

Quick Start

Prerequisites

  • Python 3.12+
  • uv for Python package management
  • Node.js 20+ and bun (optional, for web UI)

Fastest Path: Hosted Matrix + Local MindRoom (uvx only)

Use this path if you want to run MindRoom locally while using hosted chat + Matrix on mindroom.chat.

# Create ~/.mindroom/config.yaml and ~/.mindroom/.env with hosted defaults
uvx mindroom config init --profile public

# Add model auth, or use `--profile public-codex` and run `codex login`
$EDITOR ~/.mindroom/.env

# Generate pair code in https://chat.mindroom.chat:
# Settings -> Local MindRoom -> Generate Pair Code
uvx mindroom connect --pair-code ABCD-EFGH

# Start MindRoom
uvx mindroom run

See Hosted Matrix deployment guide for full details.

Installation and starting

# Clone and install
git clone https://github.com/mindroom-ai/mindroom
cd mindroom
uv sync

MindRoom auto-installs the fully local sentence-transformers embedder runtime on first use when memory.embedder.provider: sentence_transformers is configured. Run uv sync --extra matrix_e2ee if you need Matrix E2EE support in encrypted rooms.

# Start MindRoom (agents + API + web dashboard)
uv run mindroom run

The web interface will be available at http://localhost:8765. When running from a source checkout, MindRoom builds the dashboard assets on first start if Bun is available.

First Steps

In any Matrix client (Element, FluffyChat, etc.):

You: @mindroom_assistant What can you do?
Assistant: I can coordinate our team of specialized agents...

You: @mindroom_research @mindroom_analyst What are the latest AI breakthroughs?
[Agents collaborate to research and analyze]

How Agents Work

Agent Response Rules

Agents respond using Matrix thread relations to keep conversations organized. If your client or bridge sends only plain replies, MindRoom still folds a reply into an existing thread when its reply chain eventually reaches a threaded ancestor or a proven thread root. Plain replies whose chains never reach threaded context remain plain replies.
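The reply-chain walk can be sketched as follows. This is an illustration of the rule just described, not MindRoom's actual code; the event representation is simplified to just an in-reply-to pointer and an optional thread relation.

```python
# Illustrative sketch of the reply-chain walk described above (not
# MindRoom's real implementation): a plain reply is folded into a
# thread only if its in-reply-to chain reaches a threaded ancestor.

def resolve_thread(event_id, events):
    """events: event_id -> {"reply_to": id | None, "thread": id | None}."""
    seen = set()
    current = event_id
    while current is not None and current not in seen:
        seen.add(current)                  # guard against reply cycles
        ev = events.get(current)
        if ev is None:
            return None                    # chain left the known events
        if ev["thread"] is not None:
            return ev["thread"]            # threaded ancestor found
        current = ev["reply_to"]           # keep walking the reply chain
    return None                            # never reached threaded context


events = {
    "root":  {"reply_to": None,   "thread": None},
    "t1":    {"reply_to": None,   "thread": "root"},   # threaded message
    "plain": {"reply_to": "t1",   "thread": None},     # plain reply to it
    "lone":  {"reply_to": "root", "thread": None},     # chain never threaded
}
resolve_thread("plain", events)  # → "root" (joins the thread)
resolve_thread("lone", events)   # → None  (stays a plain reply)
```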

  1. Mentioned agents always respond - Tag them to get their attention
  2. Single agent continues - One agent in thread? It keeps responding
  3. Multiple agents collaborate - They work together, not compete
  4. Smart routing - System picks the best agent for new threads
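The four rules above can be condensed into a toy selection function. This is an illustration only: the real router's "smart routing" is more than the alphabetical stand-in used for rule 4 here.

```python
# Toy version of the four response rules (not the real routing code).

def pick_responders(mentioned, thread_agents, all_agents):
    if mentioned:
        return sorted(mentioned)        # rule 1: mentioned agents always respond
    if thread_agents:
        return sorted(thread_agents)    # rules 2-3: thread participants continue
    # rule 4: stand-in for smart routing on brand-new threads
    return [sorted(all_agents)[0]]


pick_responders({"analyst"}, {"writer"}, {"analyst", "writer"})  # → ["analyst"]
pick_responders(set(), {"writer"}, {"analyst", "writer"})        # → ["writer"]
```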

Available Commands

  • !help [topic] - Get help
  • !reload-plugins - Reload configured plugins (admin only)
  • !schedule <task> - Schedule a task
  • !list_schedules - List scheduled tasks
  • !cancel_schedule <id> - Cancel a scheduled task
  • !edit_schedule <id> <task> - Edit an existing scheduled task
  • !config <operation> - Manage configuration
  • !hi - Show welcome message

Note for Self-Hosters

This repository contains everything you need to self-host MindRoom. The saas-platform/ directory contains infrastructure and code specific to running MindRoom as a hosted service and can be safely ignored by self-hosters.

Configuration

Basic Setup

  1. Create config.yaml (for example):
agents:
  assistant:
    display_name: Assistant
    role: A helpful AI assistant
    model: default
    rooms: [lobby]
    accept_invites: true  # Optional: accept authorized ad-hoc room invites

models:
  default:
    provider: anthropic
    id: claude-sonnet-4-6

mindroom_user:
  username: mindroom_user  # Set this before first run; username is immutable after bootstrap
  display_name: MindRoomUser

defaults:
  markdown: true
  compress_tool_results: false       # Safer default; enabling can invalidate Anthropic/Vertex Claude prompt caches
  # Auto-compaction is disabled until you author a compaction block.
  # compaction:
  #   enabled: true
  #   threshold_percent: 0.8
  #   reserve_tokens: 16384
  max_tool_calls_from_history: null  # Limit tool call messages replayed from history (null = no limit)
  num_history_runs: null             # Number of prior runs to include (null = all)
  thread_summary_first_threshold: 1  # First automatic summary after 1 thread message
  thread_summary_subsequent_interval: 10  # Re-summarize after each additional 10 messages

Add the thread_summary tool to an agent when you want it to write or refresh the one-line summary shown for a Matrix thread. set_thread_summary uses the current resolved thread context by default. Outside a resolved thread context, pass thread_id explicitly.

compress_tool_results now defaults to false. On Anthropic and Vertex Claude models, enabling it can mutate replayed tool messages and invalidate prompt-cache prefixes. Only re-enable it when the context savings matter more than prompt-cache reuse.

agents:
  assistant:
    tools:
      - matrix_message
      - thread_summary

Auto-compaction is destructive inside the active session. It posts a single Matrix lifecycle notice and edits it in place. Compaction runs before a reply only when that reply needs the space; otherwise it runs immediately after a successful reply, once the updated session crosses the threshold. It rewrites the stored session summary and removes the compacted raw runs from the live session, so Agno replays only the merged summary plus the remaining recent runs.
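The trigger arithmetic can be sketched from the knobs in the config above. This is one plausible reading of how threshold_percent and reserve_tokens might combine; the actual implementation may weigh them differently.

```python
# Illustrative arithmetic for the compaction trigger (not the real
# implementation). Assumption: reserve_tokens is held back for the
# reply, and the threshold applies to the remaining usable window.

def should_compact(session_tokens, context_window,
                   threshold_percent=0.8, reserve_tokens=16384):
    usable = context_window - reserve_tokens
    return session_tokens >= threshold_percent * usable


should_compact(150_000, 200_000)  # → True  (150k >= 0.8 * 183,616)
should_compact(100_000, 200_000)  # → False
```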

  2. Configure your Matrix homeserver and API keys (optional, defaults shown):
export MATRIX_HOMESERVER=https://your-matrix.server
export ANTHROPIC_API_KEY=your-key-here
# Optional: protect dashboard API endpoints (recommended for non-localhost)
# export MINDROOM_API_KEY=your-secret-key
# Optional: use a non-default config location
# export MINDROOM_CONFIG_PATH=/path/to/config.yaml

Optional Advanced Configuration

knowledge_bases:
  engineering_docs:
    path: ./knowledge_docs
    watch: false  # Direct external edits require reindex; API/dashboard mutations still schedule refresh.

agents:
  assistant:
    display_name: Assistant
    role: A helpful AI assistant
    model: default
    rooms: [lobby]
    accept_invites: true
    knowledge_bases: [engineering_docs]
    # Per-agent overrides for history/context (override defaults above):
    # compress_tool_results: true  # Re-enable only if you accept Anthropic/Vertex Claude prompt-cache invalidation
    # max_tool_calls_from_history: 5
    # num_history_runs: 10
    # compaction:
    #   enabled: true
    #   threshold_tokens: 60000  # Requires context_window on the active model or compaction.model

voice:
  enabled: true
  stt:
    provider: openai
    model: whisper-1

memory:
  backend: mem0
  embedder:
    provider: sentence_transformers
    config:
      model: sentence-transformers/all-MiniLM-L6-v2

mindroom_user:
  username: mindroom_user  # Set this before first run; username is immutable after bootstrap
  display_name: MindRoomUser

authorization:
  global_users: ["@alice:example.com"]
  room_permissions:
    "!exampleRoomId:example.com": ["@bob:example.com"]
  default_room_access: false

mindroom_user.username can only be set before the internal user account is created. After first startup, change mindroom_user.display_name instead if you only need a different visible name.
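The authorization block above implies a simple lookup order. The sketch below is an illustration of that reading (global allow-list first, then per-room lists, then the default); the actual precedence rules may differ.

```python
# Sketch of the authorization lookup implied by the config above
# (illustrative; real precedence rules may differ).

def is_authorized(user, room, cfg):
    if user in cfg.get("global_users", []):
        return True                                   # global allow-list wins
    room_users = cfg.get("room_permissions", {}).get(room)
    if room_users is not None:
        return user in room_users                     # explicit per-room list
    return cfg.get("default_room_access", False)      # fallback for other rooms


cfg = {
    "global_users": ["@alice:example.com"],
    "room_permissions": {"!exampleRoomId:example.com": ["@bob:example.com"]},
    "default_room_access": False,
}
is_authorized("@alice:example.com", "!other:example.com", cfg)        # → True
is_authorized("@bob:example.com", "!exampleRoomId:example.com", cfg)  # → True
is_authorized("@bob:example.com", "!other:example.com", cfg)          # → False
```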

Deployment Options

🏠 Self-Hosted

Complete control on your infrastructure:

# Using your existing Matrix server
MATRIX_HOMESERVER=https://your-matrix.server uv run mindroom run

# Or bootstrap local Synapse + Cinny (Linux/macOS; Docker required)
mindroom local-stack-setup --synapse-dir /path/to/mindroom-stack/local/matrix
uv run mindroom run

☁️ Our Hosted Service (Coming Soon)

Zero setup, enterprise security:

  • End-to-end encrypted (we can't read your data)
  • Automatic updates and scaling
  • 99.9% uptime SLA
  • Start free, scale as needed

🔀 Hybrid

Mix and match:

  • Sensitive rooms on your server
  • General rooms on our cloud
  • Agents collaborate seamlessly across both

Architecture

Technical Stack

  • Matrix: Any homeserver (Synapse, Conduit, Dendrite, etc.)
  • Agents: Python with matrix-nio
  • AI Models: OpenAI, Anthropic, Ollama, or any provider
  • Memory: Mem0 + ChromaDB vector storage (persistent on disk)
  • UI: Web dashboard + any Matrix client

Philosophy

We believe AI should be:

  1. Persistent: Your AI should remember and learn from every interaction
  2. Ubiquitous: Available wherever you communicate
  3. Collaborative: Multiple specialists working together
  4. Private: You control where your data lives
  5. Natural: Just chat—no complex interfaces

Status

  • Production ready with 1000+ commits
  • 100+ integrations working today
  • Multi-agent collaboration with persistent memory
  • Federation across organizations and platforms
  • Self-hosted & cloud options available
  • Voice transcription for Matrix voice messages
  • Text-to-speech tools via OpenAI, Groq, ElevenLabs, and Cartesia
  • 🚧 Mobile apps in development
  • 🚧 Agent marketplace planned

Contributing

We welcome contributions! See CLAUDE.md for the current development workflow and quality checks.

From the developer of 10+ successful open source projects with thousands of users. MindRoom represents 1000+ commits of production-ready code, not a weekend experiment.

License

Acknowledgments

Built with:

  • Matrix - The federated communication protocol
  • Agno - AI agent framework
  • matrix-nio - Python Matrix client

mindroom - AI that follows you everywhere, remembers everything, and stays under your control.


Download files

Download the file for your platform.

Source Distribution

mindroom-2026.4.269.tar.gz (3.9 MB view details)

Uploaded Source

Built Distribution


mindroom-2026.4.269-py3-none-any.whl (2.0 MB view details)

Uploaded Python 3

File details

Details for the file mindroom-2026.4.269.tar.gz.

File metadata

  • Download URL: mindroom-2026.4.269.tar.gz
  • Upload date:
  • Size: 3.9 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for mindroom-2026.4.269.tar.gz
Algorithm Hash digest
SHA256 c8680097a71d62a1d58eaedd50a45132879c402c837ffdf5b84ea198ee2fb5ed
MD5 4da61d51379f6510065c8cf9afa084bf
BLAKE2b-256 ed30da6af86e5d88d5d11eefd7f5d941a010557936b60d4533755c199a389214


Provenance

The following attestation bundles were made for mindroom-2026.4.269.tar.gz:

Publisher: release.yml on mindroom-ai/mindroom

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file mindroom-2026.4.269-py3-none-any.whl.

File metadata

  • Download URL: mindroom-2026.4.269-py3-none-any.whl
  • Upload date:
  • Size: 2.0 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for mindroom-2026.4.269-py3-none-any.whl
Algorithm Hash digest
SHA256 8afdb7b84351e58f3f0925552ea91a0f65544cf0e880597fd136baeeb3ab7fda
MD5 3b892c365c6501c2749833feece32e02
BLAKE2b-256 6d05234066a88618f037a3d801b4834bbc196e23b0a30e214261e69bb38577d8


Provenance

The following attestation bundles were made for mindroom-2026.4.269-py3-none-any.whl:

Publisher: release.yml on mindroom-ai/mindroom

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
