AI agent brain with memory, teams, flows, document ingestion, and MCP — your agent, but better every day
AIBrain — Your AI agent that remembers, learns, and acts
80.0% recall on LongMemEval M with a 109M model. 99.8% on MSDialog. Zero-parameter FTS5 achieves 97.1% NDCG@5 on dialogue retrieval. All on a consumer laptop, no GPU required. One install. 80 workflows. Agent teams. Flow engine. Document ingestion. Universal MCP. Dual-system memory that compounds across sessions. Runs locally, no cloud lock-in.
AIBrain is a self-hosted operating system for AI agents. It gives any agent persistent memory, typed Agent/Task/Team composition, a decorator-driven Flow engine, document ingestion, universal MCP client connectivity, a reactive workflow engine, a Complementary Learning Systems (CLS) cognitive substrate with a weekly consolidation cycle, multi-model LLM routing, an approval queue, inter-agent messaging, and 80 ready-to-run workflows — all behind a 42-page Next.js dashboard. Deploy it on a laptop, a VPS, or in Docker; your agent carries its entire brain with it.
Why AIBrain?
Most AI memory systems are toys. They store everything, retrieve nothing useful, and require expensive GPUs to run. AIBrain is different:
- Verified retrieval performance. On LongMemEval M (500 instances, the standard benchmark for long-term conversational memory), AIBrain's SelRoute system achieves Ra@5 = 0.800 with a 109M bge-base model — beating the strongest published baseline (Contriever + LLM fact keys, 0.762) by +0.038 on recall and +0.180 on NDCG@5. A 22MB MiniLM model achieves Ra@5 = 0.785, statistically equivalent to models 50% larger. The FTS5 baseline (zero trainable parameters, zero GPU) achieves NDCG@5 = 0.692 on LongMemEval M, exceeding every published system including 1.5B-parameter models.
- Near-perfect on domain-specific retrieval. On MSDialog (2,199 tech-support dialogues), AIBrain achieves Ra@5 = 0.998 with a 22MB MiniLM model — near-perfect retrieval for technical support contexts.
- Zero-parameter dialogue retrieval. On LMEB dialogue (840 instances), the FTS5 zero-ML retriever achieves NDCG@5 = 0.971 — no neural parameters, no GPU, no training data.
- Total evaluation instances: 62,792+. Every number is from verified JSON files in the benchmarks/ directory. The methodology is described in the peer-reviewed SelRoute paper (McKee, 2026, arXiv:2604.02431).
- All benchmarks run on a consumer laptop. No GPU required. No cloud credits. No special hardware.
The secret is the CLS architecture: a Complementary Learning Systems dual-system memory inspired by the mammalian brain. Every session writes to fast hippocampal memory. A weekly `aibrain dream` consolidation cycle slow-extracts patterns and upgrades routing weights. The brain doesn't just store more; it gets measurably better at subsequent tasks.
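To make "zero-parameter" concrete, here is a minimal sketch of what FTS5 retrieval looks like: SQLite's built-in full-text index ranked by BM25, no embeddings, no GPU. The table and column names are illustrative, not AIBrain's actual schema; it only needs an SQLite build with FTS5, which standard CPython includes on most platforms.

```python
# Zero-parameter lexical retrieval with SQLite FTS5 (illustrative schema).
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE messages USING fts5(text)")
db.executemany(
    "INSERT INTO messages VALUES (?)",
    [("I reset my router password yesterday",),
     ("The quarterly report is due Friday",),
     ("Router firmware update fixed the dropouts",)],
)

# SQLite's bm25() assigns numerically LOWER scores to better matches,
# so we sort ascending and take the top 5.
rows = db.execute(
    "SELECT text, bm25(messages) FROM messages "
    "WHERE messages MATCH ? ORDER BY bm25(messages) LIMIT 5",
    ("router",),
).fetchall()
for text, score in rows:
    print(f"{score:7.3f}  {text}")
```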
What's New in v1.5.35
- Budget schema migrations — `budget_policies` table (scope_type, scope_id, window, limit_cents, warn_percent, hard_stop, is_active) added as migration v2; `budget_incidents` added as migration v3. Schema is applied automatically on first run.
- Event bus drain — `drain(timeout=5.0)` added to `event_bus.py` using `concurrent.futures.wait()`, fixing a harness task-complete timing race that could drop events under load. A sketch of the pattern follows this list.
- Test suite — 23 budget tests + 8 event bus tests now passing.
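For context, here is a minimal sketch of that drain pattern built on `concurrent.futures.wait()`. It illustrates the technique only; the class and method shapes are assumed, not copied from AIBrain's `event_bus.py`.

```python
import concurrent.futures


class EventBus:
    """Toy event bus: handlers run on a thread pool; drain() waits them out."""

    def __init__(self, workers: int = 4) -> None:
        self._pool = concurrent.futures.ThreadPoolExecutor(max_workers=workers)
        self._pending: set[concurrent.futures.Future] = set()

    def publish(self, handler, event) -> None:
        fut = self._pool.submit(handler, event)
        self._pending.add(fut)
        fut.add_done_callback(self._pending.discard)

    def drain(self, timeout: float = 5.0) -> bool:
        """Block until all in-flight handlers finish; True if fully drained."""
        # Snapshot the set: done-callbacks mutate self._pending while we wait.
        done, not_done = concurrent.futures.wait(set(self._pending), timeout=timeout)
        return not not_done
```

A test harness calls `bus.drain()` after its final publish, so slow handlers finish before results are read instead of racing the task-complete check.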
Previously in v1.5.18: Honest cost tracking, Dream consolidation (CLS REM phase), rubric signal fixes, metrics sentinel handling, CLI help expanded to cover 28 new commands, dashboard empty-state guidance.
Previously in v1.5.17: litellm added to core dependencies, AIBRAIN_ENV default changed to development, dashboard directory clarified, dashboard setup hint in CLI.
Previously in v1.5.16: Security hardening — IPv4-mapped IPv6 SSRF bypass fixed, `startswith()` path-traversal boundary tightened, KG foreign-key constraint set to `ON DELETE SET NULL`, import `db_path` no-op fixed. MCP server lazy-load fix.
Previously in v1.5.14: Temporal knowledge graph, local-first conversation history import (`aibrain import`), Cursor plugin, n8n community node, Supabase backend, Boss Agent SQLite persistence, Bolt.diy and Base44 starter templates.
Previously in v1.5.12: 16 framework adapters (LangChain, CrewAI, AutoGen, Haystack, and 12 more), Windows NSIS installer, auto-updater + backend watchdog, full dark/light mode WCAG 2.1 AA, starter memories, OAuth PKCE flow, Goals slide-over, memory lifecycle hooks.
Introduced in v1.5.0: Graph memory, vault citations, data classification — SQLite relations table with BFS path-finding, memory_id citations on every recall, SECRET/SENSITIVE routing to local Ollama.
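As a reference for the v1.5.0 graph-memory design, here is a toy sketch of BFS path-finding over a relations table. The schema and names are assumed for illustration, not AIBrain's actual table.

```python
# BFS path-finding over an SQLite relations table (illustrative schema).
import sqlite3
from collections import deque

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE relations (src TEXT, dst TEXT)")
db.executemany("INSERT INTO relations VALUES (?, ?)",
               [("alice", "acme"), ("acme", "invoice-42"), ("invoice-42", "bob")])

def bfs_path(start: str, goal: str):
    """Return the shortest relation path from start to goal, or None."""
    seen, queue = {start}, deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        rows = db.execute("SELECT dst FROM relations WHERE src = ?", (path[-1],))
        for (nxt,) in rows:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

print(bfs_path("alice", "bob"))  # ['alice', 'acme', 'invoice-42', 'bob']
```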
Install
Pick the path that matches your environment. All paths install the same package from PyPI.
One-line installer (macOS / Linux / WSL)
```bash
curl -sSL https://myaibrain.org/install | sh
```
Creates an isolated venv at ~/.aibrain/venv, pip-installs aibrain, and symlinks the CLI into /usr/local/bin (or ~/.local/bin fallback). Re-run any time to upgrade. Python 3.10+ required.
One-line installer (Windows PowerShell)
```powershell
irm https://myaibrain.org/install.ps1 | iex
```
Creates an isolated venv at %USERPROFILE%\.aibrain\venv, pip-installs aibrain, and adds the venv Scripts dir to your user PATH. Python 3.10+ required.
Homebrew (macOS / Linux)
```bash
brew tap sindecker/tap
brew install aibrain
```
Installs into a Homebrew-managed venv and symlinks the CLI.
pip (any platform)
```bash
pip install aibrain
```
Docker
```bash
docker pull sindecker/aibrain:latest
```
Quick Start
```bash
# Install
pip install aibrain

# Initialize your brain
aibrain init

# Start the server
aibrain serve

# Open the dashboard
open http://localhost:3000
```
Your agent now has persistent memory. Every conversation, every workflow, every decision is stored and retrievable. Run `aibrain dream` weekly to consolidate patterns and improve retrieval.
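If you would rather not remember the weekly run, here is a minimal scheduler sketch. Only the `aibrain dream` command comes from the docs; the loop itself is illustrative, and a cron entry works just as well.

```python
# Minimal weekly scheduler for the consolidation cycle (illustrative).
# Equivalent cron entry: 0 3 * * 0 aibrain dream
import subprocess
import time

WEEK_SECONDS = 7 * 24 * 3600

while True:
    subprocess.run(["aibrain", "dream"], check=True)  # run consolidation now
    time.sleep(WEEK_SECONDS)                          # then wait a week
```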
Benchmark Results
AIBrain's SelRoute retrieval system has been evaluated on 62,792+ instances across multiple benchmarks. All results are from verified JSON files in the benchmarks/ directory.
LongMemEval M (500 instances)
| System | Parameters | Ra@5 | NDCG@5 |
|---|---|---|---|
| SelRoute bge-base (metadata routing) | 109M | 0.800 | 0.812 |
| SelRoute bge-small (metadata routing) | 33M | 0.786 | 0.718 |
| SelRoute FTS5 (zero-ML, zero-GPU) | 0 | 0.745 | 0.692 |
| all-MiniLM-L6-v2 | 22M | 0.785 | 0.717 |
LongMemEval S (500 instances)
| System | Parameters | Ra@5 |
|---|---|---|
| SelRoute bge-base | 109M | 0.920 |
| SelRoute Oracle | — | 0.992 |
MSDialog (2,199 tech-support dialogues)
| System | Parameters | Ra@5 |
|---|---|---|
| SelRoute MiniLM | 22M | 0.998 |
LoCoMo (1,986 QA pairs)
| System | Parameters | Recall@5 | Ra@5 |
|---|---|---|---|
| SelRoute FTS5 (zero-ML) | 0 | 0.859 | 0.767 |
QReCC (52,678 conversational queries)
| System | Parameters | MRR (%) |
|---|---|---|
| SelRoute FTS5+reasoning | 0 | 51.66 |
LMEB dialogue (840 instances)
| System | Parameters | NDCG@5 |
|---|---|---|
| SelRoute FTS5 (zero-ML) | 0 | 0.971 |
Key findings:
- A 22MB MiniLM model achieves Ra@5 = 0.785 on LongMemEval M — competitive retrieval with a model that fits in RAM on any device.
- A zero-parameter FTS5 retriever achieves NDCG@5 = 0.971 on LMEB dialogue — no neural parameters, no GPU, no training data.
- All benchmarks run on a consumer laptop. No GPU required.
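For reference, the Recall@5 and NDCG@5 columns above follow the standard binary-relevance definitions sketched below; Ra@5 is the recall-style metric defined in the SelRoute paper. This is not AIBrain's evaluation harness, just the textbook formulas.

```python
# Standard binary-relevance retrieval metrics, for reference only.
# ranked: retrieved ids in rank order; relevant: the set of gold ids.
import math

def recall_at_k(ranked, relevant, k=5):
    return len(set(ranked[:k]) & set(relevant)) / len(relevant)

def ndcg_at_k(ranked, relevant, k=5):
    dcg = sum(1.0 / math.log2(i + 2)
              for i, doc in enumerate(ranked[:k]) if doc in relevant)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(k, len(relevant))))
    return dcg / ideal if ideal else 0.0

print(recall_at_k(["m1", "m2", "m3"], {"m1", "m9"}))          # 0.5
print(round(ndcg_at_k(["m1", "m2", "m3"], {"m1", "m9"}), 3))  # 0.613
```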
Architecture
Complementary Learning Systems (CLS)
AIBrain implements a dual-system memory architecture inspired by the mammalian brain:
- Hippocampal fast encoding. Every session writes immediately to short-term memory. No indexing delay, no batch processing. Your agent remembers what just happened.
- Neocortical consolidation. A weekly `aibrain dream` cycle slow-extracts patterns from accumulated sessions, upgrades routing weights, and consolidates long-term knowledge. The brain gets measurably better at subsequent tasks.
- SelRoute routing. The SelRoute system (arXiv:2604.02431) routes each query to the optimal retrieval strategy — dense embedding, sparse FTS5, or hybrid — based on query characteristics. This is what enables a 22MB model to match 1.5B-parameter systems. A toy sketch of the routing idea follows this list.
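As referenced above, a toy illustration of strategy routing in the SelRoute spirit: pick a retrieval backend from cheap query features. The features and thresholds here are invented for illustration; the real routing is described in the paper.

```python
# Illustrative query-to-strategy router (features and thresholds invented).
def route(query: str) -> str:
    tokens = query.split()
    # IDs, error codes, and acronyms favor exact lexical match.
    has_rare = any(any(c.isdigit() for c in t) or t.isupper() for t in tokens)
    if has_rare or len(tokens) <= 3:
        return "fts5"      # sparse lexical match wins on codes and short queries
    if len(tokens) > 12:
        return "hybrid"    # long, multi-clause questions benefit from both
    return "dense"         # paraphrase-heavy natural questions

print(route("error 0x80070005"))                      # fts5
print(route("what did we decide about the launch"))   # dense
```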
Boss Agent
Multi-agent orchestration with one orchestrator and multiple isolated workers sharing a single brain. Each worker has its own context, memory, and tool access, but all share the same persistent knowledge base.
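A hypothetical sketch of this pattern's shape: one shared store, isolated per-worker contexts. All names and types are illustrative, not AIBrain's API.

```python
# Hypothetical Boss Agent shape: isolated workers, one shared brain.
from dataclasses import dataclass, field

@dataclass
class SharedBrain:
    memories: list[str] = field(default_factory=list)  # shared knowledge base

@dataclass
class Worker:
    name: str
    brain: SharedBrain
    context: list[str] = field(default_factory=list)   # private per worker

    def run(self, task: str) -> None:
        self.context.append(task)                         # isolated context
        self.brain.memories.append(f"{self.name}: {task}")  # shared write

brain = SharedBrain()
workers = [Worker(f"w{i}", brain) for i in range(3)]
for w, task in zip(workers, ["triage", "research", "draft"]):
    w.run(task)
print(brain.memories)  # every worker's output lands in the one brain
```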
Companies / RBAC
Full organizational hierarchy — agents, tasks, roles, and approval flows. Manage team access, delegate tasks, and enforce governance policies.
Brain Marketplace
Share or sell trained brains via git. Export your brain, push it to a repository, and let others import it. Brains carry learned patterns, routing weights, and consolidated knowledge.
Satellite DBs
Federated search across multiple brain instances. Query one brain and get results from all connected brains.
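A toy sketch of what federated fan-out looks like: query every connected backend, then merge results by score. The `search` callables stand in for per-instance retrieval and are not AIBrain's API.

```python
# Illustrative federated fan-out across several brain backends.
import heapq

def federated_search(query, backends, k=5):
    hits = []
    for search in backends:          # each backend returns (score, text) pairs
        hits.extend(search(query))
    return heapq.nlargest(k, hits)   # best-scoring results across all brains

local = lambda q: [(0.9, "local: router reset steps"), (0.4, "local: old note")]
remote = lambda q: [(0.7, "remote: related support ticket")]
print(federated_search("reset", [local, remote], k=2))
```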
Pricing
| Tier | Price | Features |
|---|---|---|
| Free | $0 | Unlimited local usage. All features. No cloud dependency. |
| Pro | $9.95/mo | Priority support, early access to new features, cloud sync. |
| Team | $29.95/mo | Everything in Pro, plus RBAC, audit logs, dedicated support. |
All tiers include the same core AIBrain software. The difference is support level and cloud features.
CLI Entrypoints
- `aibrain` — Main CLI
- `aibrain-server` — Start the backend server
- `aibrain-mcp` — MCP server
- `aibrain-compress` — SelRoute compression library (50-99% token savings on git/build/test output)
- `aibrain-settings` — Configure AIBrain
- `aibrain-demo` — Run a demo
License
Proprietary. See LICENSE file for details.
Contributing
See CONTRIBUTING.md for development setup and contribution guidelines.
Download files
File details
Details for the file aibrain-1.5.35.tar.gz.
File metadata
- Download URL: aibrain-1.5.35.tar.gz
- Upload date:
- Size: 4.1 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `2a2ddb9e9eb15d1a63d805a5c02dbb6d1441d9f9df89ecb6cc6d1f74963781de` |
| MD5 | `0b7d345d366679a519f7fc8b18444665` |
| BLAKE2b-256 | `7371a392de02e0b554368584da2e6418df9c7b86418ee8fdf8bc6c48c4caa498` |
File details
Details for the file aibrain-1.5.35-py3-none-any.whl.
File metadata
- Download URL: aibrain-1.5.35-py3-none-any.whl
- Upload date:
- Size: 4.7 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `68eac1473feae0bae8c4d9185ec99f6054177c2e9e483eb669acacd19cb2bf35` |
| MD5 | `f9a936ed8ca6de1f2902cf1cdd8447bd` |
| BLAKE2b-256 | `ae2f23f2574464014bbe25885e2666852a0d21ae398b7198f758df3a990c0199` |