
AI agent brain with memory, teams, flows, document ingestion, and MCP — your agent, but better every day


AIBrain -- Your AI agent that remembers, learns, and acts

Stop configuring. Start automating. One install. 71 workflows. Agent teams. Flow engine. Document ingestion. Universal MCP. Dual-system memory that compounds across sessions. Runs locally, no cloud lock-in.

AIBrain is a self-hosted operating system for AI agents. It gives any agent persistent memory, typed Agent/Task/Team composition, a decorator-driven Flow engine, document ingestion, universal MCP client connectivity, a reactive workflow engine, a Complementary Learning Systems (CLS) cognitive substrate with a nightly consolidation cycle, multi-model LLM routing, an approval queue, inter-agent messaging, and 71 ready-to-run workflows -- all behind a 60-page Next.js dashboard. Deploy it on a laptop, a VPS, or in Docker; your agent carries its entire brain with it.



What's New in v1.5.20

  • Budget schema migrations -- budget_policies table (scope_type, scope_id, window, limit_cents, warn_percent, hard_stop, is_active) added as migration v2; budget_incidents added as migration v3. Schema is applied automatically on first run.
  • Event bus drain -- drain(timeout=5.0) added to event_bus.py using concurrent.futures.wait(), fixing a harness task-complete timing race that could drop events under load.
  • Test suite -- 23 budget tests + 8 event bus tests now passing.

Previously in v1.5.18: Honest cost tracking, Dream consolidation (CLS REM phase), rubric signal fixes, metrics sentinel handling, CLI help expanded to cover 28 new commands, dashboard empty-state guidance.

Previously in v1.5.17: litellm added to core dependencies, AIBRAIN_ENV default changed to development, dashboard directory clarified, dashboard setup hint in CLI.

Previously in v1.5.16: Security hardening -- IPv4-mapped IPv6 SSRF bypass fixed, startswith() path-traversal boundary tightened, KG foreign-key constraint set to ON DELETE SET NULL, import db_path no-op fixed. MCP server lazy-load fix.

Previously in v1.5.14: Temporal knowledge graph, local-first conversation history import (aibrain import), Cursor plugin, n8n community node, Supabase backend, Boss Agent SQLite persistence, Bolt.diy and Base44 starter templates.

Previously in v1.5.12: 16 framework adapters (LangChain, CrewAI, AutoGen, Haystack, and 12 more), Windows NSIS installer, auto-updater + backend watchdog, full dark/light mode WCAG 2.1 AA, starter memories, OAuth PKCE flow, Goals slide-over, memory lifecycle hooks.

Introduced in v1.5.0: Graph memory, vault citations, data classification -- SQLite relations table with BFS path-finding, memory_id citations on every recall, SECRET/SENSITIVE routing to local Ollama.


Install

Pick the path that matches your environment. All paths install the same package from PyPI.

One-line installer (macOS / Linux / WSL)

curl -sSL https://myaibrain.org/install | sh

Creates an isolated venv at ~/.aibrain/venv, pip-installs aibrain, and symlinks the CLI into /usr/local/bin (or ~/.local/bin fallback). Re-run any time to upgrade. Python 3.10+ required.

One-line installer (Windows PowerShell)

irm https://myaibrain.org/install.ps1 | iex

Creates an isolated venv at %USERPROFILE%\.aibrain\venv, pip-installs aibrain, and adds the venv Scripts dir to your user PATH. Python 3.10+ required.

Homebrew (macOS / Linux)

brew tap sindecker/tap
brew install aibrain

Installs into a Homebrew-managed venv and symlinks the CLI. See the tap: https://github.com/sindecker/homebrew-tap

pip (any platform)

python3 -m venv ~/aibrain-env && source ~/aibrain-env/bin/activate
pip install aibrain

Windows note: if aibrain is not found after install, the Python Scripts directory may not be on PATH. Run python -m aibrain setup instead, or add the Scripts directory (printed at the end of setup) to PATH.

After any of the above

aibrain setup    # interactive wizard — walks you through config, API keys, DB init, workflow selection
aibrain serve    # API at http://localhost:8001 + dashboard at http://localhost:3000

The setup wizard handles config, API keys, database init, and workflow selection. No manual file editing needed.

Alternative: Docker

pip install aibrain && aibrain setup
cp config.json.example config.json   # edit with your API keys and preferences
docker compose up --build             # dashboard at http://localhost:5173 (Docker/nginx) or http://localhost:3000 (direct)

Brain commands -- export, import, and merge agent knowledge:

aibrain brain stats                    # show brain statistics
aibrain brain export                   # export brain to JSON
aibrain brain import brain.json        # import another brain's knowledge
aibrain brain merge /path/or/git-url   # merge from git repo or local path

For local development without Docker, see Development Setup below.


The Learning Loop

Every task routed through create_task → checkout_task → complete_task writes a quality score, cost record, and evolution outcome to the database. These feed the CLS consolidation cycle, which runs weekly and updates routing weights, memory associations, and model selection — making the brain measurably better at subsequent tasks. Skipping the company system doesn't save time; it starves the learning loop of the signal it needs to compound. The overhead is milliseconds. The compounding benefit is permanent.

See docs/company-tasks.md for the full flow, code examples, and when not to create a task.
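As a rough illustration of the signal being banked (the real create_task/checkout_task/complete_task API is documented in docs/company-tasks.md; everything below is a self-contained stand-in, not the actual implementation):

```python
from collections import defaultdict

# Illustrative sketch only: this stub mimics the shape of the outcome
# signal the loop records and how a consolidation pass can use it.
ledger = []  # one record per completed task

def complete_task(task_type, model, quality, cost_cents):
    """Record the quality/cost outcome that feeds consolidation."""
    ledger.append({"type": task_type, "model": model,
                   "quality": quality, "cost": cost_cents})

def consolidate():
    """Aggregate outcomes into per-(task, model) routing weights."""
    scores = defaultdict(list)
    for r in ledger:
        # cheap-and-good outcomes earn higher weight
        scores[(r["type"], r["model"])].append(r["quality"] / (1 + r["cost"]))
    return {k: sum(v) / len(v) for k, v in scores.items()}

complete_task("summarize", "ollama", quality=0.9, cost_cents=0)
complete_task("summarize", "claude", quality=0.95, cost_cents=12)
weights = consolidate()
best = max(weights, key=weights.get)
# local model wins on quality-per-cent for summarization here
```

The point is not the arithmetic but the dependency: without the task records, consolidate() has nothing to learn from.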


Agent Teams

Define agents, give them tasks, run them as a team:

from aibrain import Agent, Task, Team

researcher = Agent(
    role="Research Analyst",
    goal="Find accurate, current information",
    backstory="Senior analyst with 10 years of market research experience",
)

writer = Agent(
    role="Technical Writer",
    goal="Produce clear, actionable reports",
    backstory="Former journalist who translates complex topics into plain English",
)

research_task = Task(
    description="Research the current state of MCP adoption across AI tools",
    expected_output="A structured summary with key findings and trends",
    agent=researcher,
)

report_task = Task(
    description="Write a 500-word briefing from the research findings",
    expected_output="A concise report suitable for a technical audience",
    agent=writer,
)

team = Team(
    agents=[researcher, writer],
    tasks=[research_task, report_task],
    process="sequential",  # or "hierarchical"
)

result = team.kickoff()
print(result.raw)

Flow Engine

Build state machines with decorators:

from aibrain.core.flow import Flow, start, listen, router
from pydantic import BaseModel

class ReviewState(BaseModel):
    document: str = ""
    score: float = 0.0
    approved: bool = False

class ReviewFlow(Flow[ReviewState]):

    @start()
    def load_document(self):
        self.state.document = "quarterly report content..."
        return self.state.document

    @listen(load_document)
    def score_document(self):
        self.state.score = 0.85
        return self.state.score

    @router(score_document)
    def decide(self):
        if self.state.score >= 0.8:
            return "approve"
        return "reject"

    @listen("approve")
    def approve(self):
        self.state.approved = True
        return "Document approved"

    @listen("reject")
    def request_revision(self):
        return "Revision needed"

flow = ReviewFlow()
result = flow.kickoff()

Document Ingestion

Feed documents directly into your brain:

from aibrain import Brain

brain = Brain()

# Ingest a single file
brain.ingest("report.pdf")
brain.ingest("data.csv")
brain.ingest("notes.md")

# Ingest an entire directory
brain.ingest("/path/to/documents/")

# Search across ingested content
results = brain.search("quarterly revenue")

Supports PDF, CSV, Excel, JSON, JSONL, Text, Markdown, and directories. Sentence-aware chunking with configurable overlap. Incremental -- re-running skips already-processed files.
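The sentence-aware chunking idea can be sketched in a few lines (a simplified stand-in, not the ingestion pipeline's actual chunker):

```python
import re

def chunk_sentences(text, max_chars=200, overlap=1):
    """Split on sentence boundaries, pack sentences into chunks up to
    max_chars, and carry `overlap` trailing sentences into the next
    chunk so context survives the boundary."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], []
    for s in sentences:
        if current and len(" ".join(current + [s])) > max_chars:
            chunks.append(" ".join(current))
            current = current[-overlap:]  # overlapping tail
        current.append(s)
    if current:
        chunks.append(" ".join(current))
    return chunks

doc = "First point. Second point with more detail. Third point. Fourth."
parts = chunk_sentences(doc, max_chars=50, overlap=1)
```

Each chunk after the first begins with the previous chunk's final sentence, which is what makes overlap useful for retrieval.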


Universal MCP Client

Connect to any MCP server and use its tools:

from aibrain.mcp import MCPServerRegistry, MCPServerStdio, MCPServerHTTP

registry = MCPServerRegistry()

# Add a stdio-based MCP server
registry.register("github", MCPServerStdio(
    command="npx",
    args=["@github/mcp-server"],
))

# Add an HTTP-based MCP server
registry.register("custom", MCPServerHTTP(
    url="http://localhost:9000/mcp",
))

# List all registered servers and their tools
for name, server in registry.servers.items():
    print(f"{name}: {server}")

Three transports: stdio, HTTP, and SSE. Tool bridging lets you call remote MCP tools as local Python functions.
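The tool-bridging idea reduces to wrapping a remote call so it behaves like a local function. A minimal sketch (the server class and call_tool signature here are illustrative stand-ins, not AIBrain's actual API):

```python
def bridge_tool(server, tool_name):
    """Return a local callable that forwards to server.call_tool()."""
    def local_tool(**kwargs):
        return server.call_tool(tool_name, kwargs)
    local_tool.__name__ = tool_name
    return local_tool

class FakeServer:
    """Stand-in for an MCP transport, for illustration only."""
    def call_tool(self, name, args):
        if name == "add":
            return args["a"] + args["b"]
        raise KeyError(name)

add = bridge_tool(FakeServer(), "add")
result = add(a=2, b=3)  # remote tool invoked like a local function
```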


Features

Agent/Task/Team System -- Typed multi-agent composition. Define agents with role, goal, and backstory. Create tasks with expected output, tools, and guardrails. Run teams with sequential or hierarchical processes. Agents can delegate to each other. Failed tasks retry automatically with configurable limits.

Flow Engine -- Decorator-driven state machines for complex workflows. @start, @listen, and @router decorators define execution graphs with conditional routing. Typed state objects via Pydantic. Persistence for long-running flows. Human-in-the-loop pause/resume. @retry and @timeout decorators. Flow visualization.

Document Ingestion -- brain.ingest("file.pdf") feeds documents directly into memory. 7 source types: PDF, CSV, Excel, JSON, JSONL, Text, Markdown, and full directories. Sentence-aware chunking with configurable overlap. Incremental ingestion skips already-processed files.

Conversation History Import -- aibrain import <file> ingests ChatGPT and Claude.ai conversation exports directly into the brain. Auto-detects format from the JSON structure. MiniLM embeddings run locally -- no OpenAI key required. Filter by date range, preview with --dry-run, and cap ingestion size with --max N.

Temporal Knowledge Graph -- SQLite triple store with temporal validity ranges. Store facts with valid_from and valid_until timestamps, query the graph at any point in time, and surface how entity relationships evolve across sessions. Layered on top of the core relations table; backward-compatible with existing graph data.
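The point-in-time query pattern is easy to see in miniature. A sketch with sqlite3 (table and column names are illustrative, not AIBrain's actual schema):

```python
import sqlite3

# Facts carry validity ranges; a point-in-time query filters on them.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE triples (
    subject TEXT, predicate TEXT, object TEXT,
    valid_from TEXT, valid_until TEXT)""")
db.executemany(
    "INSERT INTO triples VALUES (?,?,?,?,?)",
    [("alice", "works_at", "AcmeCorp", "2022-01-01", "2024-06-30"),
     ("alice", "works_at", "BetaInc", "2024-07-01", "9999-12-31")])

def facts_at(when):
    """Query the graph as it stood on a given date."""
    rows = db.execute(
        "SELECT subject, predicate, object FROM triples "
        "WHERE valid_from <= ? AND valid_until >= ?", (when, when))
    return rows.fetchall()

then = facts_at("2023-05-01")  # [('alice', 'works_at', 'AcmeCorp')]
now = facts_at("2025-01-01")   # [('alice', 'works_at', 'BetaInc')]
```

ISO-8601 date strings compare correctly as text, which is why a plain SQLite TEXT column suffices for the validity range.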

n8n Integration -- AIBrain community node for n8n. Read and write memories natively from n8n workflows without a custom HTTP node.

Cursor Plugin -- AIBrain memory MCP bundle ships as a Cursor extension. Gives Cursor persistent memory backed by a local AIBrain server with no additional configuration.

Supabase Backend -- pip install aibrain[supabase] swaps the local SQLite store for a hosted Supabase Postgres instance. Migration SQL is bundled; the switch is a single config change.

Universal MCP Client -- Connect to any MCP server from Python. MCPServerRegistry manages multiple servers. Three transports: stdio, HTTP, and SSE. Tool bridging calls remote MCP tools as local Python functions.

60-Page Dashboard -- Home, Memories, Knowledge Graph, Workflows, Visual Workflow Builder, Content Pipeline, Chat, Companies (tasks, agents, teams, goals, org), Schedules, Activity, Logs, Costs, MCP Hub, Skills, Superworker, Ingest, Backups, Settings, Routing, Satellites, Evolution, Executions, Proficiency, Profile, Mesh, Notifications, Analytics, Audit, Builder, Buildit, Brain Health, Event Bus, Flows, Feedback, and more. Every aspect of your agent is visible and controllable from the browser.

71 Automated Workflows -- Pre-built workflows across six registries (core, skill, user, content, ops, custom) covering productivity, research, communication, devops, consolidation cycles, code analysis, security, content generation, multi-agent ops, and business metrics. Includes Email Triage, ArXiv Tracker, Autonomous Researcher, Cross-Domain Connector, Burnout Detector, Content Batch Scheduler, Brain Sync Checker, Intel Agent, Dream Consolidation, Evolution Auto-Loop, and more. Enable any workflow with a single toggle; schedules are fully configurable via cron expressions.

Multi-Model LLM Router -- Route tasks to local Ollama (free) or cloud Claude/GPT with automatic fallback via litellm. Track costs per model, per workflow, per day. Test provider connections directly from Settings. The routing layer (SelRoute) compounds over time: as each hardware generation makes local inference cheaper and faster, the $0-cost local tier expands automatically, with no changes to AIBrain (see Why AIBrain Gets Stronger Over Time).
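The tiering decision can be pictured as a simple policy (tier names and rules below are illustrative, not SelRoute's actual logic):

```python
# Deterministic, cheap task classes go local; everything else
# falls through to a cloud model with per-call cost.
LOCAL_TASKS = {"summarize", "classify", "format"}

def route(task_type, local_available=True):
    if local_available and task_type in LOCAL_TASKS:
        return ("ollama", 0.0)   # $0 local tier
    return ("claude", 0.03)     # cloud fallback, hypothetical cost

tier, cost = route("classify")
fallback, _ = route("classify", local_available=False)
```

As local models improve, the policy's LOCAL_TASKS set effectively grows, which is the compounding effect described above.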

Natural Language Workflow Creation -- Describe an automation in plain English ("Every morning check my email and summarize it") and AIBrain generates the workflow script and schedule.

Visual Workflow Builder -- Canvas-based drag-and-drop node editor. Four node types (trigger, action, condition, output) with 15 subtypes. Connect nodes with bezier curves, auto-layout with topological sort, save/load named workflows, export as JSON. Zero external dependencies -- pure canvas rendering.

Knowledge Graph with Entity Extraction -- Visualize memory connections as an interactive force-directed graph. Pattern-based NER automatically extracts technologies, URLs, emails, IPs, and proper names. Edges show references, shared tags, topic similarity, and entity connections with color-coded types and hover labels.
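Pattern-based NER of this kind is just a dictionary of compiled regexes. A sketch (patterns simplified for illustration; the real extractor covers more entity classes):

```python
import re

PATTERNS = {
    "url":   re.compile(r"https?://[^\s)]+"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ip":    re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def extract_entities(text):
    """Return every match for every entity class, keyed by class."""
    return {kind: pat.findall(text) for kind, pat in PATTERNS.items()}

memo = "Ping 10.0.0.1, mail ops@example.com, docs at https://example.com/kb"
ents = extract_entities(memo)
```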

Command Palette (Ctrl+K) -- Global search across pages, actions, memories, workflows, and skills. Prefix with > for unified data search. Arrow-key navigation, instant routing.

Real-time Notifications -- WebSocket push for approval requests, activity events, and system alerts. Browser desktop notifications when approvals arrive. Live activity feed with auto-reconnect. Tab title shows pending approval count.

Cost & Observability Dashboard -- See exactly what your agent did, how much it cost, and where it failed. Trace every LLM call with token counts, latency, and cost. Set daily/monthly budgets with visual progress bars and overspend warnings.

Mobile PWA -- Install on your phone, pair via QR code, approve actions with one tap. Replaces Telegram as your mobile interface.

SQLite + FTS5 Memory -- Full-text searchable memory database with tagging, typing, and graph visualization. Export/import your agent's entire brain as JSON with one click from the dashboard.

Skill Marketplace -- Browse installed skills by category (Security, Development, Design, Automation, Media, Data, Communication). Install new skills from URL or pasted Markdown. Delete skills you no longer need. See which skills are connected to workflows and which are orphaned.

Built-in Scheduler -- APScheduler-backed cron engine reads scheduled_jobs.json on startup. Add, enable, disable, or trigger workflows from the UI or API. Run logs and next-fire times are always visible.

Approval Queue -- Workflows can request human approval before executing sensitive actions. Approve or reject from the dashboard or chat panel. WebSocket push keeps every client in sync with real-time desktop notifications.

Real-time Chat -- WebSocket chat panel with built-in commands (/status, /approvals, /schedule, /costs, /content, /create, /help). Messages are logged and accessible via REST for non-WebSocket clients.

Deep Health Checks -- /health/deep tests every subsystem: memory DB, scheduler, approvals, LLM providers, entity extractor, trace DB, and WebSocket connections. Results displayed in Settings.

Content Pipeline -- Full content management dashboard for creating, scheduling, and publishing across 7 platforms (LinkedIn, X/Twitter, Instagram, YouTube, Telegram, Email Newsletter, Blog). AI-powered content generation, queue and calendar views, platform-specific badges, and status tracking from draft through publication.

Home Dashboard -- Unified overview showing memory count, active workflows, pending approvals, LLM costs, recent activity, and upcoming scheduled jobs. Quick-action buttons navigate to any section.

Multi-Agent Mesh -- Broker integration and peer registry let multiple agents communicate. Register peers, send messages through the broker, and monitor connectivity from the Agents view.

Webhook Event Triggers -- Register incoming webhooks with optional HMAC signature validation. Webhooks trigger workflows on external events (GitHub pushes, payment notifications, CI results) as an alternative to polling on a schedule.
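The HMAC validation scheme works as sketched below (header name and signing details are illustrative; check your webhook's configuration for the exact scheme): the sender signs the raw body with a shared secret, and the receiver recomputes and compares in constant time.

```python
import hashlib
import hmac

SECRET = b"shared-webhook-secret"  # hypothetical shared secret

def sign(body: bytes) -> str:
    """Sender side: HMAC-SHA256 over the raw request body."""
    return hmac.new(SECRET, body, hashlib.sha256).hexdigest()

def verify(body: bytes, signature: str) -> bool:
    """Receiver side: constant-time comparison defeats timing attacks."""
    return hmac.compare_digest(sign(body), signature)

payload = b'{"event": "push", "repo": "aibrain"}'
sig = sign(payload)
ok = verify(payload, sig)       # genuine payload passes
tampered = verify(b"{}", sig)   # altered payload fails
```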

System Logs -- Terminal-style real-time log viewer with auto-scroll, pause/resume, level filtering (INFO/WARNING/ERROR/DEBUG), text search, and export to .txt.

API Playground -- Postman-style interactive endpoint tester with 17 pre-configured endpoints, query param inputs, JSON body editor, and response viewer with timing.

Backup Manager -- Create, restore, download, and delete memory snapshots. One-click brain export/import with file upload.

Task Runner -- Visual task queue with progress bars, duration tracking, cancel/retry actions, and live status updates.

Integrations Hub -- Central view of 12 services across 6 categories (Messaging, AI, Dev, Automation, Commerce, Finance) with connected/disconnected status and inline config.

Onboarding Tour -- 8-step guided walkthrough for first-time users with progress dots, skip option, and localStorage persistence.

Keyboard Shortcuts -- Press ? for a full shortcut overlay. N toggles the notification center. Ctrl+K opens the command palette.

Responsive Dashboard -- The frontend works on any screen size. REST endpoints for chat and approvals work without WebSocket support.

Reactive Workflow Engine -- State-driven workflow execution inspired by the FlyWire fruit fly connectome and the Rete algorithm. Nodes fire when prerequisites are met, not in sequence. 180 lines, domain-agnostic, async-native. Forward chaining, refraction, priority-based conflict resolution, and SQLite-backed checkpoint recovery. Every workflow is a reactive graph, not a script.
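Forward chaining with refraction fits in a dozen lines. A toy sketch (far simpler than the real engine, and the rule names are made up for illustration):

```python
def run(facts, rules):
    """Fire any rule whose prerequisite facts are present; refraction
    (the `fired` set) keeps a rule from firing twice."""
    facts, fired = set(facts), set()
    changed = True
    while changed:
        changed = False
        for name, needs, adds in rules:
            if name not in fired and needs <= facts:  # prerequisites met
                facts |= adds
                fired.add(name)   # refraction: fire once
                changed = True
    return facts

rules = [
    ("ingest", {"doc"},               {"chunks"}),
    ("embed",  {"chunks"},            {"vectors"}),
    ("index",  {"vectors", "chunks"}, {"searchable"}),
]
state = run({"doc"}, rules)
```

Note that no rule ordering is specified: "embed" and "index" fire whenever their inputs appear, which is what "nodes fire when prerequisites are met, not in sequence" means in practice.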

Evolution Engine -- Compounding learning loop over a Complementary Learning Systems (CLS) substrate. Outcomes feed patterns, patterns generate hypotheses, hypotheses run experiments, experiments get measured and kept or reverted. Self-criticism, loop detection, and positive pattern reinforcement. Paired with the consolidation cycle (forgetting + consolidation are the two halves of the same system), the brain compounds across sessions rather than just growing.

Pattern Bus -- Cross-domain event infrastructure. Every state change across every domain emits a DomainEvent to a SQLite-backed event table. Aggregator nodes detect anomalies and correlations. The foundation for self-initiated agent reasoning.

Inter-Agent Messaging -- Redis-backed pub/sub protocol for multi-agent coordination. Publish/subscribe events, request-response between agents, broadcast to all. Deploy one agent locally and another on a VPS -- they communicate through the broker.

Durable Workflow Execution -- Step-level retry with SQLite persistence. If a workflow fails mid-step, it resumes from the last successful step on restart. No external dependencies -- built on top of the reactive engine's checkpoint store.
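The checkpoint-and-resume mechanism can be sketched with sqlite3 (schema and function names are illustrative, not AIBrain's actual checkpoint store):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE checkpoints (workflow TEXT, step TEXT)")

def run_workflow(name, steps):
    """Run steps in order, recording each success; on a re-run,
    previously completed steps are skipped."""
    done = {s for (s,) in db.execute(
        "SELECT step FROM checkpoints WHERE workflow = ?", (name,))}
    executed = []
    for step_name, fn in steps:
        if step_name in done:
            continue  # already completed before a crash/restart
        fn()
        db.execute("INSERT INTO checkpoints VALUES (?, ?)",
                   (name, step_name))
        executed.append(step_name)
    return executed

steps = [("fetch", lambda: None), ("parse", lambda: None)]
first = run_workflow("digest", steps)   # runs both steps
second = run_workflow("digest", steps)  # nothing left to do
```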

Docker Production Build -- Multi-stage Dockerfile (Python 3.11 + Node 20) produces a single container running the API, scheduler, and nginx-served frontend with Redis broker for inter-agent messaging. Health checks and automatic restarts included.

Cross-platform Launchers -- start.sh (Unix/Mac), start.bat (Windows), and Makefile targets for dev, install, build, docker, and clean.

Tool Hooks -- @before and @after interceptors on any tool or LLM call. Log, filter, enforce policy, or transform inputs/outputs without touching the tool itself.
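The interceptor pattern behind this feature looks roughly like the following (the wiring here is an illustrative sketch, not AIBrain's actual @before/@after implementation):

```python
def with_hooks(before=None, after=None):
    """Wrap a tool so hooks can transform its inputs and output."""
    def wrap(tool):
        def wrapped(*args, **kwargs):
            if before:
                args, kwargs = before(args, kwargs)  # transform inputs
            result = tool(*args, **kwargs)
            if after:
                result = after(result)               # transform output
            return result
        return wrapped
    return wrap

def redact_secrets(args, kwargs):
    # policy hook: strip a sensitive kwarg before the tool sees it
    kwargs.pop("api_key", None)
    return args, kwargs

@with_hooks(before=redact_secrets, after=str.upper)
def echo_tool(text, **kwargs):
    return text

out = echo_tool("hello", api_key="sk-123")  # key never reaches the tool
```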

33 Event Types -- Full lifecycle observability. Agent, team, flow, tool, memory, and system events fire automatically. Subscribe to any event for custom reactions.

Tiered Retrieval (smart_search) -- Queries route through category summaries, SelRoute reranking, memory deduplication, explainable recall, and salience scoring. The best result path is chosen automatically.

Experiment & Regression Testing -- Built-in A/B framework for workflows, retrieval strategies, and agent configurations. Track metrics, compare variants, detect regressions.

15 Built-in Tools -- Ready-to-use tools for common agent tasks, available to any agent or workflow.

8 LLM Adapters -- Connect to Claude, GPT, Ollama, and more through a unified adapter interface.

Security Hardened -- Full audit with 24 findings fixed. ShellTool allowlisted, PythonREPL sandboxed, SSRF protection, CSRF middleware, secrets vault with Fernet encryption.


Why AIBrain Gets Stronger Over Time

Local inference cost falls roughly 2x every 18 months -- the hardware curve Jensen Huang describes every time NVIDIA ships a new generation. AIBrain's SelRoute routing layer sits directly on top of that curve. Today, SelRoute sends cheap, deterministic tasks (summarization, classification, formatting) to local models at $0. As each hardware generation arrives, the boundary of what counts as "cheap enough to run locally" expands. More task types tip over into the $0 tier without any change to AIBrain's code.

The compounding effect runs in three layers. First, the brain itself (memory + learned patterns) gets more valuable as it accumulates sessions -- that growth is independent of models. Second, SelRoute's routing decisions improve as it learns which task types belong on which tier. Third, as local models improve, the routing intelligence can delegate harder tasks locally, shrinking cloud spend further.

This means AIBrain is one of the few software products with a passive architectural moat tied to hardware progress. Every chip generation is a de facto product upgrade. We don't need to build better models. We route to whoever does.


Architecture

                          +------------------+
                          |     Browser      |
                          +--------+---------+
                                   |
                          HTTP / WebSocket
                                   |
                     +-------------+-------------+
                      |        nginx :5173        |
                      |  static frontend + proxy  |
                     +-------------+-------------+
                                   |
                            /api/* | /ws/*
                                   |
    +------------------------------+------------------------------+
    |                  FastAPI Backend :8001                       |
    |                                                             |
    |  +----------+  +-----------+  +-------+  +--------------+  |
    |  | Memory   |  | Reactive  |  | Chat  |  | Evolution    |  |
    |  | (SQLite  |  | Engine    |  | (WS)  |  | Engine       |  |
    |  |  + FTS5) |  | (Rete +   |  |       |  | (self-       |  |
    |  |          |  |  FlyWire) |  |       |  |  improving)  |  |
    |  +----+-----+  +-----+-----+  +---+---+  +------+-------+  |
    |       |              |            |             |          |
    |  +----+-----+  +----+------+  +---+---+  +------+-------+  |
    |  | Approval |  | Workflows |  | Agent |  | Pattern Bus  |  |
    |  | Queue    |  | (71)      |  | Broker|  | (events)     |  |
    |  +----------+  +-----------+  +---+---+  +--------------+  |
    +-------------------------------------------------------------+
                |              |                 |
          aibrain.db    scheduled_jobs.json   Redis :6379

Configuration

Copy config.json.example to config.json and fill in the values you need. Empty strings use sensible defaults. All fields are optional -- configure only what you use.

Three tiers of configuration (in priority order):

  1. Environment variables: AIBRAIN_MEMORY_DB, AIBRAIN_CRON_JOBS, etc.
  2. config.json: Copy config.json.example and customize
  3. Defaults: Relative to the AIBrain root directory

Key Fields

Field Description
llm_provider LLM routing: auto, claude, openai, ollama, or claude_cli
user_name Your name (shown in greeting and reports)
budget_daily Daily LLM spending limit in USD (e.g., 5.00)
budget_monthly Monthly LLM spending limit in USD (e.g., 50.00)
agent_name Display name for your agent
anthropic_api_key API key for Claude-powered workflows
openai_api_key API key for OpenAI/GPT-powered workflows
ollama_url Ollama API URL (default: http://localhost:11434)
memory_db Path to SQLite memory database (default: memory/memory.db)
cron_jobs Path to scheduled jobs registry (default: scheduled_jobs.json)
skills_dir Directory containing agent skills
email_accounts Array of email accounts to monitor (IMAP/SMTP config per account)
email_reply_mode "prompted" (queue for approval) or "auto"
email_auto_reply_senders Whitelist of senders for auto-reply mode
location_name, location_lat, location_lon Location for weather workflows
calendar_url, google_calendar_credentials Calendar integration
github_token, github_username GitHub API access
watched_repos, github_repos Repos to monitor for activity
rss_feeds RSS/Atom feed URLs for the digest workflow
arxiv_queries Search terms for the arXiv paper tracker
job_keywords Keywords for job search workflow
social_keywords Brand/topic terms for social listening
price_watchlist Crypto/stock tickers with type, id, and display name
habits Habits to track in the evening check-in
scan_dirs Directories for file declutter workflow
document_watch_dirs Directories for document parsing workflow
uptime_targets URLs to monitor with expected HTTP status codes

Workflows

71 workflows span six registries plus standalone scripts in the workflows/ directory. Enable any workflow with a single toggle in the dashboard or by setting "enabled": true in scheduled_jobs.json.
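A minimal scheduled_jobs.json entry might look like this -- only the "enabled" key is documented above; the other field names are illustrative guesses, so check your generated file for the real schema:

```json
{
  "rss_digest": {
    "enabled": true,
    "cron": "0 7 * * *"
  }
}
```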

Registry Overview

Registry File Focus
Core aibrain_workflows.py Productivity, research, communication, devops
Skill aibrain_skill_workflows.py Consolidation cycle, code review, security, intelligence
User aibrain_user_workflows.py Personal intelligence, career, knowledge, proactive alerts
Content aibrain_content_workflows.py Multi-platform content generation and scheduling
Ops aibrain_ops_workflows.py Multi-agent ops, business metrics, compliance
Scripts workflows/ directory Standalone scripts (evolution, job search, memory, SEO)

Core Workflows

Productivity, research, communication, and developer tooling.

Workflow Category Description
daily_planner Productivity Task plan from calendar, emails, and priorities
habit_tracker Productivity Evening habit check-in and streak tracking
file_declutter Productivity Clean up Downloads and Desktop folders
rss_digest Research Summarize new articles from RSS feeds
arxiv_tracker Research Track new papers matching your queries
price_watcher Research Crypto/stock price alerts on significant moves
email_triage Communication Check all email accounts, categorize messages
email_auto_reply Communication Draft and send replies (prompted or auto mode)
github_digest Developer Summarize GitHub activity across your repos
uptime_monitor Developer Check service availability, alert on downtime

Skill Workflows

Consolidation cycle (CLS slow-learning pathway), code analysis, security assessment, and research automation.

Workflow Category Description
skill_sharpener Consolidation Cycle Identify and practice weak skill areas
knowledge_gap_detector Consolidation Cycle Find gaps in the agent's knowledge base
code_review_runner Code & Architecture Automated code review with actionable feedback
tech_debt_analyzer Code & Architecture Track and prioritize technical debt
security_assessment_runner Security Run security assessments against targets
threat_model_generator Security Generate threat models for systems
autonomous_researcher Intelligence Deep-dive research on any topic
trend_detector Intelligence Detect emerging trends from data sources

User Workflows

Personal intelligence, professional development, knowledge synthesis, and proactive alerts.

Workflow Category Description
decision_journal Personal Track decisions and outcomes over time
idea_incubator Personal Develop and refine ideas with structured prompts
resume_updater Professional Auto-update resume from recent achievements
skill_gap_analyzer Professional Identify skills to develop for career goals
cross_domain Knowledge Find connections across different knowledge domains
mental_models Knowledge Build and apply mental models to problems
opportunity_detector Proactive Surface opportunities from memory patterns
burnout_detector Proactive Early warning signs from activity patterns

Content Workflows

Multi-platform content generation and performance tracking.

Workflow Category Description
linkedin_content Platform Generate LinkedIn posts from memory and context
x_twitter_content Platform Generate X/Twitter threads and posts
youtube_content Platform Generate YouTube scripts and descriptions
content_batch_scheduler Orchestration Schedule content across all platforms weekly
content_performance Orchestration Analyze content metrics and optimize strategy

Ops Workflows

Multi-agent coordination, operational health, and business metrics.

Workflow Category Description
brain_sync_checker Multi-Agent Verify brain consistency across agents
task_handoff_monitor Multi-Agent Track task handoffs between agents
workflow_health Ops Monitor workflow execution health and failures
session_handover Ops Generate session handover reports
credential_rotation Security Remind when credentials need rotation
package_downloads Business Track package download metrics
launch_readiness Business Pre-launch checklist validation

Custom Workflows

User-defined workflows for domain-specific automation. Create your own workflow files in the workflows/ directory. Examples include agent mesh operations, security scanning, analytics, and business tracking.

Workflow Category Description
agent_sync_checker Agent Mesh Verify sync state across all agents
broker_health Agent Mesh Monitor inter-agent broker connectivity
bt_comparator Security Compare scan results across runs
finding_trends Security Analyze vulnerability finding trends
custom_metrics Analytics Track custom metrics and KPIs
data_reconciler Analytics Reconcile data across sources
pypi_tracker Business Track PyPI package downloads
ship_readiness Business Product ship-readiness checklist

Run aibrain workflows to see the complete list with schedules and status.

All core workflows use free APIs only: Open-Meteo, CoinGecko, ArXiv, Hacker News, Reddit JSON. No paid API keys required for basic operation.


API Endpoints

The backend exposes a REST API at http://localhost:8001. All resource endpoints are prefixed with /api/agent unless noted otherwise.

Memory

Method Endpoint Description
GET /api/agent/memories List memories (paginated)
GET /api/agent/memories/search?q= Full-text search across all memories
GET /api/agent/memories/summary Memory count and type breakdown
GET /api/agent/memories/graph Graph nodes and edges for visualization
GET /api/agent/memories/export Export entire brain as JSON
POST /api/agent/memories/import Import brain from JSON
GET /api/agent/memories/{id} Get a single memory
POST /api/agent/memories Create a new memory
PUT /api/agent/memories/{id} Update a memory
DELETE /api/agent/memories/{id} Delete a memory
POST /api/agent/memories/extract-entities Backfill entity extraction on all memories

Workflows and Scheduler

Method Endpoint Description
GET /api/agent/crons List all scheduled jobs
PUT /api/agent/crons/{name} Update a job's schedule or config
GET /api/agent/crons/upcoming Next-fire times for all jobs
GET /api/agent/crons/groups Jobs grouped by category
GET /api/agent/workflows List all available workflows
GET /api/agent/workflows/{name} Get workflow details and source
POST /api/agent/workflows/{name}/enable Enable a workflow
POST /api/agent/workflows/{name}/disable Disable a workflow
POST /api/agent/workflows/{name}/run Trigger a workflow immediately
GET /api/agent/scheduler/status Scheduler health and job count
POST /api/agent/scheduler/reload Reload all jobs from disk

Approval Queue

Method Endpoint Description
GET /api/agent/approvals List items (filter by status/category)
GET /api/agent/approvals/summary Pending count and category breakdown
GET /api/agent/approvals/{id} Get a single approval item
POST /api/agent/approvals Create an approval request
POST /api/agent/approvals/{id}/approve Approve an item
POST /api/agent/approvals/{id}/reject Reject an item

Chat

Method Endpoint Description
WebSocket /ws/chat Real-time chat with history replay on connect
GET /api/agent/chat/history Recent chat messages (REST fallback)
POST /api/agent/chat/send Send a chat message (REST fallback)

Agent Mesh

Method Endpoint Description
GET /api/agent/status Agent status and system info
GET /api/agent/activity Activity timeline
POST /api/agent/activity Log an activity event
GET /api/agent/peers List registered peer agents
POST /api/agent/peers Register a new peer
DELETE /api/agent/peers/{name} Remove a peer
POST /api/agent/broker/send Send a message through the broker
GET /api/agent/broker/recv Receive messages from the broker
GET /api/agent/broker/status Broker connectivity status

Webhooks

Method Endpoint Description
GET /webhooks List registered webhooks
POST /webhooks Register a new webhook
PUT /webhooks/{name} Update a webhook configuration
DELETE /webhooks/{name} Remove a webhook
POST /webhooks/{name} Invoke a webhook (external callers)
GET /webhooks/{name}/history Invocation history for a webhook

Search

Method Endpoint Description
GET /api/agent/search?q= Unified search across memories, workflows, and skills

Skills

Method Endpoint Description
GET /api/agent/skills List installed agent skills
GET /api/agent/skills/{name} Get skill details
POST /api/agent/skills/install Install a skill from URL or pasted content
DELETE /api/agent/skills/{name} Remove an installed skill

Configuration

Method Endpoint Description
GET /api/agent/config Get current configuration
PUT /api/agent/config Update configuration
POST /api/agent/config/test-provider Test LLM provider connectivity

Tasks

Method Endpoint Description
GET /api/agent/tasks List all tasks
POST /api/agent/tasks Create a new task
POST /api/agent/tasks/{id}/cancel Cancel a running task
POST /api/agent/tasks/{id}/retry Retry a failed task

Backups

Method Endpoint Description
GET /api/agent/backups List all backups
POST /api/agent/backups Create a new backup
POST /api/agent/backups/{id}/restore Restore from a backup
GET /api/agent/backups/{id}/download Download a backup file
DELETE /api/agent/backups/{id} Delete a backup

Logs and Costs

Method Endpoint Description
GET /api/agent/logs Structured log viewer
GET /api/agent/costs/summary LLM cost summary
GET /api/agent/costs/history Daily cost breakdown

System

Method Endpoint Description
GET /health Health check with scheduler and approval status
GET /health/deep Deep health check -- tests all subsystems
WebSocket /ws/notifications Real-time push notifications for approvals and activity

Docker Deployment

The included docker-compose.yml runs the full stack: app server, frontend, and Redis broker.

# Copy and configure environment
cp env.example .env   # edit with your API keys

# Build and start
docker compose up --build -d

# View logs
docker compose logs -f aibrain

# Stop
docker compose down

The stack exposes:

  • 5173 -- Dashboard UI (nginx serving the built React app)
  • 8001 -- Backend API (FastAPI + uvicorn)
  • 6379 -- Redis broker (inter-agent messaging)

Volumes persist your data across restarts:

  • ./data -- SQLite databases (aibrain.db)
  • ./memory -- Legacy memory database
  • ./config.json -- Your configuration
  • ./workflows -- Workflow scripts

For VPS-only deployment (no frontend, agent-to-agent only):

docker compose -f docker-compose.vps.yml up -d

The Dockerfile uses a multi-stage build: Python 3.11 for the backend, Node 20 for the frontend build, and a final slim image with nginx for production serving. Health checks run every 30 seconds against the /health endpoint.


Boss Agent Orchestrator (Pro/Team)

Run multiple AI agent workers in parallel, each in its own isolated Docker container, coordinated by a single boss process on the host. Workers execute tasks independently -- shell commands, Python scripts, or full Playwright browser automation -- and merge their results back into one shared brain. More workers means faster knowledge accumulation.

Boss (host) ──→ Redis ──→ Worker 1 (Docker + Playwright)
                      ──→ Worker 2 (Docker + Playwright)
                      ──→ Worker N
              ←── Session merge ←── All workers feed one brain

Quick Start

docker compose -f docker-compose.boss.yml up -d --scale worker=2

CLI

python -m boss.boss assign "Research competitor pricing"   # assign a task
python -m boss.boss status                                 # check worker status
python -m boss.boss results                                # collect completed results
python -m boss.boss scale 4                                # scale to 4 workers
python -m boss.boss teardown                               # stop all workers

Worker Types

Type Description
Shell Execute shell commands in an isolated container
Python Run Python scripts with full library access
Playwright Browser automation -- login flows, scraping, form filling, screenshots

Features

  • Communication -- Workers report progress, ask questions, and broadcast to peers through Redis channels
  • Budget guards -- Token limits and iteration caps prevent runaway costs per task
  • Session merge -- Results from all workers deduplicate and merge into the host brain, ranked by score
  • Evolution bridge -- Worker outcomes feed the Evolution Engine's skill learning loop
  • Checkpoint and replay -- Tasks checkpoint progress; a crashed worker resumes from its last checkpoint
  • Model routing -- Complex reasoning routes to Opus, standard tasks to Sonnet, monitoring to Haiku

Pricing

Tier Workers Scope
Free 1 instance Single machine
Pro Up to 3 workers Single machine
Team Up to 10 workers Cross-machine coordination

MCP Server -- Connect Any AI Agent

AIBrain includes an MCP (Model Context Protocol) server that gives any compatible AI agent persistent memory with selective routing. Connect Claude Code, Cursor, Windsurf, or any MCP client.

Performance: #1 on LongMemEval benchmark (Ra@5=0.789, NDCG@5=0.796) -- beats Contriever (110M params) and Stella V5 (1.5B params) with a 22MB model + rule-based routing.

Setup

Add to your .claude.json or .mcp.json:

{
  "mcpServers": {
    "aibrain-memory": {
      "command": "aibrain",
      "args": ["mcp"]
    }
  }
}

aibrain mcp starts the MCP server in stdio mode -- the correct transport for Claude Code, Cursor, Windsurf, and any MCP client that spawns a subprocess. Requires pip install aibrain[mcp].

Three Modes

Mode Config What Ships Performance
No ML AIBRAIN_EMBEDDING_MODEL=none FTS5 only, 0 deps NDCG 0.692
Default (no config needed) MiniLM 22MB, 384-dim Ra@5 0.789, NDCG 0.796
bge-base AIBRAIN_EMBEDDING_MODEL=BAAI/bge-base-en-v1.5 110MB, 768-dim Ra@5 0.791, NDCG 0.812
Custom AIBRAIN_EMBEDDING_MODEL=your/model Any sentence-transformer Routing adapts

Tools Provided

  • memory_store -- Store a memory (auto-enriched for better retrieval)
  • memory_search -- Search with selective routing (auto-detects query type, or pass search_type hint)
  • memory_recall -- Recall top memories by importance

Dependencies

pip install mcp sentence-transformers sqlite-vec

All optional -- the server gracefully degrades. No embeddings? FTS5 only. No enricher? Raw content. No sqlite-vec? Skip vector search.
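That degrade-in-place behaviour can be pictured as optional imports behind capability flags -- a sketch of what's described above; the flag names and strategy labels here are assumptions, not AIBrain internals:

```python
# Optional-dependency pattern: probe each extra at import time and pick the
# richest search strategy the installed packages allow.
try:
    from sentence_transformers import SentenceTransformer  # embeddings
    HAS_EMBEDDINGS = True
except ImportError:
    HAS_EMBEDDINGS = False

try:
    import sqlite_vec  # vector-search SQLite extension
    HAS_VEC = True
except ImportError:
    HAS_VEC = False

def search_strategy():
    """Fall back gracefully: vectors if possible, otherwise FTS5 only."""
    if HAS_EMBEDDINGS and HAS_VEC:
        return "vector+fts5"
    if HAS_EMBEDDINGS:
        return "embed-rerank+fts5"
    return "fts5-only"
```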


Development Setup

The fastest way to start both backend and frontend:

# Windows
start.bat

# Unix/Mac
./start.sh

# Or using Make
make dev

To run manually without the launchers:

# Backend
cd backend
pip install -r requirements.txt
python -m uvicorn main:app --host 0.0.0.0 --port 8001 --reload

# Dashboard (separate terminal) — dashboard/ is the active Next.js app
cd dashboard
npm install
npm run dev

Other Make targets: make install, make build, make docker, make clean.

The dashboard dev server runs on port 3000. dashboard/ is the active Next.js app -- use it for all development; the frontend/ directory contains a legacy Vite prototype that is no longer maintained.


Connect Your Agent

AIBrain works with any AI agent -- Claude, GPT, Gemini, Ollama, or your own framework.

Option 1: Just tell your agent

Paste this to any AI agent (Claude Code, ChatGPT, etc.):

You have access to AIBrain at http://localhost:8001
Store memories: POST /api/agent/memories  Body: {"name": "...", "content": "...", "type": "reference", "tags": ""}
Search memories: GET /api/agent/memories/search?q=your+query
Send chat: POST /api/agent/chat/send  Body: {"content": "message", "role": "agent"}
Dashboard: http://localhost:3000

Option 2: Python SDK

from aibrain_sdk import AIBrain

agent = AIBrain("http://localhost:8001")
agent.remember("user prefers dark mode", tags=["preference"])
results = agent.recall("dark mode")
agent.say("Task complete.")
approval = agent.request_approval("Deploy to production")

Option 3: Example agents

# Claude
python examples/claude_agent.py

# OpenAI GPT
python examples/openai_agent.py

# Local LLM (Ollama -- no API key needed)
python examples/ollama_agent.py

The SDK uses only Python stdlib (urllib) -- zero dependencies. See aibrain_sdk.py for the full API.


Remote Access

Open http://<your-machine-ip>:3000 from any browser on your network (or port 5173 if running via Docker). The responsive design adapts to any screen size. For remote access over the internet, use an SSH tunnel or put behind a reverse proxy with auth.


Brain Export and Import

Transfer your agent's entire memory to another machine:

# Export
curl http://localhost:8001/api/agent/memories/export > brain.json

# Import on new machine
curl -X POST http://localhost:8001/api/agent/memories/import \
  -H "Content-Type: application/json" \
  -d @brain.json

Duplicates are automatically skipped during import.
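A minimal sketch of what skip-on-duplicate can look like -- hashing memory content and dropping matches; the actual matching key AIBrain uses during import is an assumption here:

```python
import hashlib

def dedupe(existing, incoming):
    """Return only the incoming memories whose content is not already present.
    Keying on a SHA-256 of the content field is an illustrative choice."""
    seen = {hashlib.sha256(m["content"].encode()).hexdigest() for m in existing}
    return [m for m in incoming
            if hashlib.sha256(m["content"].encode()).hexdigest() not in seen]
```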


Project Structure

aibrain/
├── aibrain_sdk.py            # Python SDK — connect any agent
├── setup_wizard.py           # Interactive first-run setup
├── docker-compose.yml        # One-command Docker deploy
├── Dockerfile                # Multi-stage production build
├── Makefile                  # dev, install, build, docker, clean
├── start.sh                  # Unix/Mac launcher
├── start.bat                 # Windows launcher
├── config.json.example       # Configuration template
├── scheduled_jobs.json       # Workflow schedules
├── backend/
│   ├── main.py               # FastAPI server + scheduler + chat + WS
│   ├── aibrain_api.py        # Memory, cron, workflow, skill, search APIs
│   ├── chat_responder.py     # Multi-provider LLM chat (Claude/GPT/Ollama/CLI)
│   ├── llm_router.py         # litellm-backed unified model routing
│   ├── entity_extractor.py   # Pattern-based NER for knowledge graph
│   ├── trace_logger.py       # LLM call tracing with cost tracking
│   ├── action_executor.py    # Approved action execution engine
│   ├── workflow_generator.py # Natural language to workflow script
│   ├── webhooks.py           # Webhook registration and invocation
│   ├── scheduler.py          # APScheduler cron engine
│   ├── approval_queue.py     # Human-in-the-loop approval system
│   └── requirements.txt      # Python dependencies
├── dashboard/               # Active Next.js app (port 3000) — use this for development
│   ├── src/App.jsx           # Router + 28-component navigation
│   └── src/components/
│       ├── HomeDashboard.jsx      # System overview with sparklines
│       ├── MemoryDashboard.jsx    # Memory search, browse, edit
│       ├── MemoryGraph.jsx        # Force-directed knowledge graph
│       ├── CronDashboard.jsx      # Workflow scheduling and control
│       ├── ChatPanel.jsx          # WebSocket chat with commands
│       ├── ApprovalQueue.jsx      # Human-in-the-loop approvals
│       ├── AgentStatus.jsx        # Agent health and peer mesh
│       ├── ActivityTimeline.jsx   # Real-time activity feed (WS)
│       ├── WorkflowBuilder.jsx    # Visual drag-and-drop workflow editor
│       ├── WorkflowLibrary.jsx    # Content pipeline and library
│       ├── WebhooksDashboard.jsx  # Webhook management
│       ├── CostDashboard.jsx      # LLM cost tracking and budgets
│       ├── MCPHub.jsx             # MCP server management
│       ├── SkillMarketplace.jsx   # Skill browse, install, delete
│       ├── Settings.jsx           # Config, provider testing, health
│       ├── SetupWizard.jsx        # First-run guided setup
│       ├── CommandPalette.jsx     # Global Ctrl+K search overlay
│       ├── NotificationCenter.jsx # Real-time notification panel
│       ├── KeyboardShortcuts.jsx  # Shortcut overlay (press ?)
│       ├── SystemLogs.jsx         # Terminal-style log viewer
│       ├── ApiPlayground.jsx      # Interactive API tester
│       ├── BackupManager.jsx      # Memory backup/restore
│       ├── TaskRunner.jsx         # Task queue with progress
│       ├── Integrations.jsx       # Service connection hub
│       ├── OnboardingTour.jsx     # First-run walkthrough
│       ├── ErrorBoundary.jsx      # React error boundary
│       ├── MobileLayout.jsx       # Mobile-optimized layout
│       ├── MobilePairing.jsx      # QR code pairing for mobile
│       └── Toast.jsx              # Toast notification component
├── examples/
│   ├── basic_agent.py        # Minimal agent example
│   ├── claude_agent.py       # Claude-powered agent
│   ├── openai_agent.py       # GPT-powered agent
│   └── ollama_agent.py       # Local LLM agent (no API key)
├── workflows/                # Self-contained workflow scripts
└── memory/
    └── memory.db             # SQLite + FTS5 database

Profile System

Manage multiple brain configurations for different use cases:

aibrain profile list                              # list all profiles with mode and memory count
aibrain profile create research --mode isolated   # create with dedicated brain DB
aibrain profile create team --mode hive           # create with shared brain DB
aibrain profile info research                     # show profile details
aibrain profile delete research --confirm         # unregister a profile

Two modes:

Mode Description
Hive Shared brain DB across profiles. Use when spreading work across multiple agent instances that should share memories.
Isolated Dedicated brain DB per profile (aibrain_<name>.db). Use for training specialist brains or marketplace packaging.

Hive profiles share the primary brain, skills, agents, and rules via filesystem junctions. Isolated profiles get their own database at the data directory level.


Interactive Settings

A single menu to configure every tunable in AIBrain:

aibrain settings    # opens the master settings menu

Nine sub-menus, each showing live status:

Sub-menu What it controls
Compression Per-type token compression toggles (8 categories)
Learning Signal detection thresholds and minimum message length
Model routing LLM provider selection and task-type routing rules
Workflows Enable/disable workflows with live count
Setup Reconfigure individual setup wizard sections
Brain sync Push/pull direction and table selection
Evolution Auto-evolution toggle and interval
Agents Company agent roster management
Schedule Cron job configuration

Arrow keys or number keys to navigate, Enter to open a sub-menu, Q or Escape to go back.


Brain Sync

Full brain portability across machines and agents. Export covers 23 tables -- not just memories, but companies, agents, skills, workflows, evolution history, entities, approvals, permissions, and schedules:

aibrain brain export                   # export full brain to JSON
aibrain brain export --output my.json  # export to specific file
aibrain brain import brain.json        # import another brain (deduplicates automatically)
aibrain brain merge /path/or/git-url   # merge from git repo or local path
aibrain brain stats                    # show table counts and brain size

Credentials are automatically scrubbed during export. The sync format is JSON, safe to commit to a private repo for backup or transfer between machines.
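Export-time scrubbing can be as simple as key-pattern matching. The following is a hypothetical illustration of that step -- the actual patterns and field names AIBrain scrubs are not documented here:

```python
import re

# Keys that look like secrets get their values masked before export.
SECRET_KEYS = re.compile(r"(api[_-]?key|token|password|secret)", re.I)

def scrub(record):
    """Replace values of secret-looking keys before writing export JSON."""
    return {k: ("[scrubbed]" if SECRET_KEYS.search(k) else v)
            for k, v in record.items()}
```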


Token Compression

aibrain-compress reduces CLI output before it reaches your agent, saving 40-99% of tokens depending on command type. Pure Python, zero telemetry, zero network calls.

# Wrap mode -- run and compress in one call
aibrain-compress git status
aibrain-compress pytest
aibrain-compress cargo build

# Pipe mode -- used by the shell hook
git diff | aibrain-compress -- git diff

# Install shell hook (auto-compresses git, pytest, cargo, etc.)
aibrain-compress init

# Manage per-type toggles
aibrain settings    # then select Compression

Eight filter categories:

Category Commands Typical savings
Git status, diff, log, push, pull, fetch, stash 50-87%
Test pytest, vitest, jest, cargo test 90-99%
Build tsc, cargo build, eslint, prettier, next build 40-80%
Docker docker build, docker compose 70%+
Pip pip install, pip list 70%+
Env env, printenv 60%+
JSON cat on JSON files 60%+
Traceback Python tracebacks 50%+

Each category can be toggled independently via aibrain settings or environment variables (AIBRAIN_COMPRESS_DISABLE_GIT, etc.).
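The environment-variable toggle can be sketched as below -- the convention that any non-empty value disables a category is an assumption:

```python
import os

def compression_enabled(category):
    """A category is on unless its AIBRAIN_COMPRESS_DISABLE_<CATEGORY>
    variable is set to a non-empty value (assumed convention)."""
    return not os.environ.get(f"AIBRAIN_COMPRESS_DISABLE_{category.upper()}")
```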

Python API:

from aibrain.tools.compress import compress
compressed = compress(command=['git', 'status'], output=raw_output)

Brain Fitness

A single objective score (0.0--1.0) that measures how well your brain is performing:

aibrain fitness                # current score with trend
aibrain fitness --window 60    # use 60-day window instead of default 30
aibrain fitness --trend        # show score progression over time

The fitness score combines execution success rate, quality scores, and learning velocity. Scores are interpreted as: 0.8+ excellent, 0.6+ good, 0.4+ developing, below 0.4 needs attention.
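As a sketch, the combination and banding described above might look like this -- the component weights are illustrative assumptions; only the band thresholds come from this README:

```python
def fitness(success_rate, quality, learning_velocity, weights=(0.5, 0.3, 0.2)):
    """Weighted blend of the three components into one 0.0-1.0 score.
    The weights are hypothetical; AIBrain's actual mix is internal."""
    w1, w2, w3 = weights
    return w1 * success_rate + w2 * quality + w3 * learning_velocity

def interpret(score):
    """Band labels from the interpretation guide above."""
    if score >= 0.8:
        return "excellent"
    if score >= 0.6:
        return "good"
    if score >= 0.4:
        return "developing"
    return "needs attention"
```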


Product Metrics

Operationalize your agent's five core claims with measured data from the live database:

aibrain metrics    # print the full product metrics dashboard

Tracks: consolidation rate (decreasing correction rate over time — the brain compounds as CLS slow-learning extracts lessons from fast-episodic traces), memory growth, workflow execution health, consolidation cycle effectiveness, and retrieval quality. Each metric shows actual measured values, not estimates.


State Predictor

Predict which workflow will run next based on execution history patterns:

aibrain next       # predict top 3 most likely next workflows
aibrain next 5     # predict top 5

Uses frequency-based transition modeling with time-of-day and day-of-week weighting. No ML -- just counted transitions weighted by recency. Each prediction includes a confidence score and reasoning.
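The counted-transitions approach fits in a few lines (the recency decay constant is an assumption, and the time-of-day and day-of-week weighting described above is omitted for brevity):

```python
from collections import Counter

def predict_next(history, top=3, decay=0.9):
    """Predict likely next workflows from an ordered run history (oldest first).
    Counts transitions out of the current workflow, weighting recent pairs more."""
    if len(history) < 2:
        return []
    current = history[-1]
    scores = Counter()
    weight = 1.0
    # Walk transitions newest-first so recent pairs count more.
    for prev, nxt in reversed(list(zip(history, history[1:]))):
        if prev == current:
            scores[nxt] += weight
        weight *= decay
    total = sum(scores.values()) or 1.0
    # Return (workflow, confidence) pairs, highest confidence first.
    return [(wf, round(s / total, 3)) for wf, s in scores.most_common(top)]
```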


Correction Tracking

Track whether your brain is learning per domain by analyzing correction and feedback patterns:

aibrain corrections              # show correction trends across all domains
aibrain corrections --days 90    # use 90-day window

Classifies corrections by domain (security, git, deployment, config, testing, data, general), buckets into 7-day windows, and computes trend direction. A decreasing correction rate means the brain is learning.
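The bucketing and trend computation can be sketched like this -- comparing half-window averages is a simplifying assumption; the real trend computation may differ:

```python
from datetime import datetime, timedelta

def bucket_counts(dates, now, days=90, bucket_days=7):
    """Bucket correction timestamps into 7-day windows, oldest bucket first."""
    n_buckets = days // bucket_days
    counts = [0] * n_buckets
    for d in dates:
        age = (now - d).days
        if 0 <= age < days:
            counts[n_buckets - 1 - age // bucket_days] += 1
    return counts

def direction(counts):
    """'improving' when the recent half averages fewer corrections than the older half."""
    half = len(counts) // 2
    older, recent = counts[:half], counts[half:]
    older_avg = sum(older) / max(len(older), 1)
    recent_avg = sum(recent) / max(len(recent), 1)
    if recent_avg < older_avg:
        return "improving"
    if recent_avg > older_avg:
        return "regressing"
    return "flat"
```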


Interactive Approvals

An interactive terminal menu for reviewing and acting on queued approval requests:

aibrain approvals    # launch interactive approval menu

Items are grouped by category. Keyboard controls:

Key Action
Space Toggle focused item
S Select all in category
A Approve all checked
R Reject all checked
D Detail view
G Next category
P Next page (if >15 items)
Up/Down Navigate
1-9 Quick toggle
Q Quit

Marketing Stats

Track download metrics and marketing performance:

aibrain marketing stats              # interactive dashboard
aibrain marketing stats --summary    # plain text summary (non-interactive)
aibrain marketing stats --sync       # refresh data from sources

Pre-Check

A binary pass/fail gate to run before any public exposure (PyPI publish, blog post, open-source push):

aibrain pre-check                   # scan current repo
aibrain pre-check --path /repo      # scan a specific repo

Checks for: leaked credentials in public-facing files, internal tool names that should not appear in public content, missing license files, and other release-readiness criteria.


Doctor

System health scanner that finds and fixes common problems:

aibrain doctor    # run full diagnostic

Scans for shadow databases (duplicate DBs created by different processes), rescues stranded memories from shadow DBs into the primary, fixes MCP server configuration, validates filesystem junctions, and runs enforcement constraint checks. Safe to run repeatedly -- it reports what it finds and fixes.


CLI Reference

Complete list of aibrain commands:

Setup and System

Command Description
aibrain setup First-time setup wizard (--auto for no prompts, --reconfigure to update)
aibrain init Initialize the brain database
aibrain start Restore stack, start services, install hooks
aibrain status Show current stack state (crons, hooks, services)
aibrain serve Start API backend + dashboard UI + open browser
aibrain server Start FastAPI server only (no dashboard)
aibrain update Safe self-update (backup, pull, validate, auto-config)
aibrain version Print installed version
aibrain doctor Scan for shadow DBs, rescue memories, fix MCP config
aibrain health Brain health report (memories, quality, tier usage)
aibrain fitness Brain fitness objective score and trend
aibrain pre-check Pre-visibility checklist before public release
aibrain settings Interactive master settings menu (9 sub-menus)
aibrain models Show available local + remote models with routing

Memory and Brain

Command Description
aibrain summary Dashboard statistics
aibrain search "query" Cross-table full-text search
aibrain browse Open memory browser in default browser
aibrain forget "topic" Preview deletions (add --confirm to execute)
aibrain brain export Export full brain to JSON (23 tables, credentials scrubbed)
aibrain brain import <file> Import brain from JSON (auto-deduplicates)
aibrain brain merge <source> Merge from git URL or local path
aibrain brain stats Brain table counts and size
aibrain import <file> Import ChatGPT or Claude.ai conversation export (local-first, no API key required)
aibrain dream Run nightly consolidation cycle (compress, reinforce, prune)
aibrain chat Interactive conversational mode with the brain

Workflows and Skills

Command Description
aibrain workflows list List all workflows with status
aibrain workflows enable <n> Enable a workflow (auto-enables dependencies)
aibrain workflows disable <n> Disable a workflow
aibrain workflows enable --recommended Enable starter set
aibrain workflows sync Create OS scheduled tasks (--dry-run to preview)
aibrain workflows run <n> Run a workflow immediately
aibrain workflows deps Show dependency map
aibrain skills Show skill inventory with episode counts and trends
aibrain packs Browse available brain packs (domain bundles)
aibrain packs activate <n> Activate a pack and its workflows

Profiles and Configuration

Command Description
aibrain profile list List all profiles (hive/isolated) with memory counts
aibrain profile create <name> Create a profile (--mode hive or isolated)
aibrain profile info <name> Show profile details
aibrain profile delete <name> Unregister a profile (requires --confirm)
aibrain config get <key> Read a config value
aibrain config set <key> <value> Write a config value
aibrain create <name> Scaffold a new AIBrain project

Analytics and Diagnostics

Command Description
aibrain metrics Product metrics dashboard (5 measured claims)
aibrain corrections Correction trends per domain (learning rate)
aibrain next [N] Predict next N most likely workflows (default 3)
aibrain audit [N] Show last N execution audit entries (default 20)
aibrain marketing stats Download metrics and marketing performance

Agent Teams and Companies

Command Description
aibrain company create <name> Create a company (agent org structure)
aibrain company list List companies
aibrain agent list --company <id> List agents in a company
aibrain agent hire <name> --company <id> Hire a new agent
aibrain task create <title> --company <id> Create a task
aibrain task list --company <id> List tasks
aibrain approvals Interactive approval queue
aibrain inbox --company <id> Show pending approvals
aibrain approve <id> Approve a queued item (--note for comment)
aibrain reject <id> Reject a queued item (--note for comment)

Brain History and Mesh

Command Description
aibrain history snapshot "msg" Snapshot current brain state
aibrain history list List all snapshots
aibrain history rollback <id> Roll back to a snapshot
aibrain history branch <name> Branch brain for experiments
aibrain history diff <a> [b] Compare brain versions
aibrain mesh status Show mesh agent status
aibrain mesh register <n> <db> Register agent in mesh
aibrain mesh merge Merge all agents (consensus)
aibrain mesh distribute Push merged brain to all agents

Stack Management

Command Description
aibrain stack save [name] Save named stack snapshot
aibrain stack restore [name] Restore from snapshot
aibrain stack rollback [N] Roll back N steps (default 1, keeps last 3)
aibrain stack set-default Save current state as default
aibrain stack list List saved snapshots

Training and Testing

Command Description
aibrain train <workflow> -n N Training loop with feedback (default 3 iterations)
aibrain test -n N [workflows...] Automated quality evaluation (--save-baseline)

Marketplace

Command Description
aibrain marketplace package --name "Brain" Package brain for sale
aibrain marketplace validate <file> Validate a .brain file
aibrain marketplace publish <file> Publish to marketplace
aibrain marketplace search "query" Search marketplace
aibrain marketplace install <id> Install a brain
aibrain marketplace revenue Revenue dashboard

Token Compression

Command Description
aibrain-compress <command> Run a command with compressed output
aibrain-compress init Install shell hook (auto-compresses git, pytest, etc.)
aibrain-compress uninstall Remove shell hook
aibrain-compress config show Show effective compression config

Other

Command Description
aibrain license Show license status and tier
aibrain license activate <key> Activate a license key
aibrain license tiers Show pricing tiers
aibrain mcp Start MCP server (stdio mode)
aibrain tools List available built-in tools
aibrain encrypt Encryption utilities
aibrain attack search "query" Search ATT&CK techniques, malware, tools
aibrain events Query the event bus
aibrain scale stats Brain scaling health report
aibrain scale compact Archive stale memories

Account Management

myaibrain.org ships with a full user account system on top of the AIBrain backend: email+password signup, optional OAuth through GitHub / Google / Microsoft, optional TOTP 2FA, and an account dashboard showing subscription tier, license key, and Stripe portal access.

Model

  • One account per email. Users can add multiple login methods to the same account: password, GitHub, Google, Microsoft.
  • Any method lands at the same dashboard with the same license key and subscription state.
  • TOTP 2FA is optional and works regardless of login method. Codes are RFC 6238 (SHA-1, 30s window, 6 digits) with ±1 step tolerance for clock skew — compatible with Google Authenticator, Authy, 1Password.
  • Passwords are bcrypt cost 12. Sessions are 32-byte random tokens in HttpOnly Secure SameSite=Lax cookies, 30-day sliding expiry. Failed logins are rate-limited to 5 per email per 15 minutes.

Endpoints

All under the /auth/* prefix on the backend:

Method Path What it does
POST /auth/signup email + password, bcrypt, returns session cookie
POST /auth/login email + password (+ optional TOTP), returns session cookie
POST /auth/logout deletes the current session
POST /auth/password/reset/request sends a reset link via SMTP (15-min single-use token)
POST /auth/password/reset/confirm accepts token, sets a new password, invalidates existing sessions
GET /auth/verify-email?token=... marks the email as verified
GET /auth/oauth/providers returns which OAuth providers are configured (frontend hides disabled buttons)
GET /auth/oauth/{provider}/start 302 to the provider with PKCE + state + nonce
GET /auth/oauth/{provider}/callback completes the flow, creates or links the account, issues a session
POST /auth/oauth/{provider}/link initiates a link flow for the logged-in user (returns authorize URL)
POST /auth/2fa/setup generates a TOTP secret + otpauth:// URI, requires password reauth
POST /auth/2fa/verify verifies the first code, flips totp_enabled on
POST /auth/2fa/disable disables TOTP, requires password reauth
GET /auth/me current user, linked methods, 2FA status, subscription tier + masked license key

OAuth setup for self-hosting

OAuth providers are enabled only when both the client ID and client secret environment variables are set. When either is missing, the provider's login button is hidden and nothing errors.

# GitHub — register at https://github.com/settings/developers
export GITHUB_OAUTH_CLIENT_ID=...
export GITHUB_OAUTH_CLIENT_SECRET=...

# Google — console.cloud.google.com -> APIs & Services -> Credentials
export GOOGLE_OAUTH_CLIENT_ID=...
export GOOGLE_OAUTH_CLIENT_SECRET=...

# Microsoft — portal.azure.com -> App registrations
export MICROSOFT_OAUTH_CLIENT_ID=...
export MICROSOFT_OAUTH_CLIENT_SECRET=...

# Where OAuth callbacks come back to (used to build redirect_uri)
export AIBRAIN_BASE_URL=https://api.myaibrain.org

# Where the frontend lives (used for post-login redirects and email links)
export MYAIBRAIN_SITE_URL=https://myaibrain.org

For each provider, register the callback URL as {AIBRAIN_BASE_URL}/auth/oauth/{provider}/callback.
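A hypothetical helper expressing that pattern -- the backend presumably derives redirect_uri the same way:

```python
def oauth_callback_url(base_url, provider):
    """Build the callback URL to register with each provider, following the
    {AIBRAIN_BASE_URL}/auth/oauth/{provider}/callback pattern above."""
    return f"{base_url.rstrip('/')}/auth/oauth/{provider}/callback"
```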

SMTP for email verification and password reset

export SMTP_HOST=smtp.gmail.com
export SMTP_PORT=587
export SMTP_USER=you@example.com
export SMTP_PASSWORD=an-app-password
export SMTP_FROM=noreply@myaibrain.org

When SMTP is not configured, signup and reset requests still succeed — the email simply isn't sent, a warning is logged, and the user can retry once SMTP is available.

Frontend pages

The sindecker/myaibrain-site repo has matching static pages:

  • /signup — email/password + OAuth buttons
  • /login — email/password + OAuth + 2FA challenge
  • /account — dashboard showing email, linked methods, 2FA status, tier, masked license key, valid_until, buttons for Stripe portal / cancel / 2FA / link / change password
  • /password-reset and /password-reset/confirm
  • /verify-email
  • /2fa/setup

All pages use the existing dark theme and client-side validation. Server-side validation is always the source of truth.

Database tables

All live in aibrain.db and are created by _migrate_schema, so existing installs pick them up on next start:

  • users -- password_hash, totp_secret, totp_enabled, email_verified, and last_login_at are added as new columns if the RBAC schema was pre-existing
  • oauth_links -- composite PK on user_id+provider
  • sessions
  • password_reset_tokens and email_verification_tokens
  • license_state -- per-user subscription state written by the Stripe webhook
  • login_attempts -- for rate limiting
  • oauth_states -- short-lived PKCE+state+nonce storage


Keyboard Shortcuts

Shortcut Action
Ctrl+K Open command palette
? Show keyboard shortcuts overlay
N Toggle notification center
>query Search memories, workflows, and skills from palette
Escape Close any modal or palette
Arrow keys Navigate command palette results
Enter Select highlighted palette result

Benchmarks

AIBrain publishes honest benchmark results using a two-tier metric system.

The core distinction: retrieval recall@5 and end-to-end QA accuracy are not the same thing. High retrieval recall does not imply high answer accuracy. Many published memory benchmarks report only retrieval recall while presenting it as end-to-end performance. AIBrain reports both.

| Metric | AIBrain v1.5.3 | What it measures |
| --- | --- | --- |
| Retrieval Recall@5 | 90.7% | Did the correct memory appear in top-5? |
| End-to-End QA Accuracy | 31.5% | Did the system produce the correct answer? |

Benchmark: LongMemEval-M, 54-question stratified sample (2026-04-14). The 59-point gap reveals where the system actually fails: LLM answer extraction, not retrieval.
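The two metrics are computed independently, which is why they can diverge. A minimal sketch with hypothetical helper names and toy data shows how retrieval can succeed on every question while answer extraction still fails:

```python
def recall_at_5(retrieved_ids: list[str], gold_id: str) -> bool:
    """Did the correct memory appear anywhere in the top-5 results?"""
    return gold_id in retrieved_ids[:5]

def qa_correct(predicted_answer: str, gold_answer: str) -> bool:
    """Did the system produce the correct final answer (exact match here)?"""
    return predicted_answer.strip().lower() == gold_answer.strip().lower()

# Toy run: retrieval succeeds on both questions, but the LLM only
# extracts the right answer from the retrieved context once.
questions = [
    {"retrieved": ["m7", "m2", "m9"], "gold_mem": "m2",
     "pred": "Paris", "gold_ans": "Paris"},
    {"retrieved": ["m4", "m1", "m3"], "gold_mem": "m1",
     "pred": "1997", "gold_ans": "1998"},
]
recall = sum(recall_at_5(q["retrieved"], q["gold_mem"]) for q in questions) / len(questions)
accuracy = sum(qa_correct(q["pred"], q["gold_ans"]) for q in questions) / len(questions)
print(recall, accuracy)  # recall is 1.0 while accuracy is only 0.5
```

Reporting only the first number, as many memory benchmarks do, hides exactly the failure mode the second number exposes.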

Leaderboard and submission spec: to submit your memory system's results, open a pull request following the submission spec. The leaderboard ranks systems by end-to-end QA accuracy; retrieval-only submissions are listed separately.


License

Proprietary. All rights reserved. See LICENSE for details.

For support, visit myaibrain.org.

