A cognitive memory system for AI agents — episodic, semantic, and procedural memory with FTS5 search, vector embeddings, neuromodulation, and MCP server integration.

brainctl

A cognitive memory system for AI agents. Single SQLite file. No server required.

from brainctl import Brain

brain = Brain()
brain.remember("User prefers dark mode")
brain.search("dark mode")
brain.entity("Alice", "person", observations=["Engineer", "Likes Python"])
brain.relate("Alice", "works_at", "Acme")
brain.log("Deployed v2.0")

MCP Server (Claude Desktop / VS Code)

{
  "mcpServers": {
    "brainctl": {
      "command": "brainctl-mcp"
    }
  }
}

12 tools: memory_add, memory_search, event_add, event_search, entity_create, entity_get, entity_search, entity_observe, entity_relate, decision_add, search, stats

Install

pip install brainctl               # core
pip install "brainctl[mcp]"        # with MCP server
pip install "brainctl[vec]"        # with vector search (sqlite-vec)
pip install "brainctl[all]"        # everything

CLI

# Memories
brainctl memory add "Python 3.12 is the minimum version" -c convention
brainctl memory search "python version"

# Entities (typed knowledge graph)
brainctl entity create "Alice" -t person -o "Engineer; Likes Python; Based in NYC"
brainctl entity get Alice
brainctl entity relate Alice works_at Acme
brainctl entity search "engineer"

# Events
brainctl event add "Deployed v2.0 to production" -t result -p myproject
brainctl event search -q "deploy"

# Cross-table search (memories + events + entities)
brainctl search "deployment"

# Prospective memory (triggers that fire on future queries)
brainctl trigger create "Alice mentions vacation" -k vacation,alice -a "Remind about project deadline"
brainctl trigger check "alice is going on vacation"

# Stats
brainctl stats

What Makes It Different

Unlike mem0, Zep, and MemGPT, brainctl combines all of the following:

  • Single file (SQLite)
  • No server required
  • MCP server included
  • Full-text search (FTS5)
  • Vector search
  • Entity registry
  • Knowledge graph
  • Consolidation engine
  • Confidence decay
  • Bayesian scoring
  • Prospective memory
  • Write gate (surprise scoring)
  • Multi-agent support
  • No LLM calls for memory ops

Architecture

brain.db (single SQLite file)
├── memories        FTS5 full-text + optional vec search
├── events          timestamped logs with importance scoring
├── entities        typed nodes (person, project, tool, concept...)
├── knowledge_edges directed relations between any table rows
├── decisions       recorded with rationale
├── memory_triggers prospective memory (fire on future conditions)
└── 20+ more tables (consolidation, beliefs, policies, epochs...)

Consolidation Engine (hippocampus.py)
├── Confidence decay    — unused memories fade
├── Temporal promotion  — frequently-accessed memories strengthen
├── Dream synthesis     — discover non-obvious connections
├── Hebbian learning    — co-retrieved memories form edges
├── Contradiction detection
└── Compression         — merge redundant memories
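
The confidence-decay step above can be sketched as a simple exponential schedule. This is an illustration only: the half-life value and the exact formula used in hippocampus.py are assumptions, not brainctl's actual implementation.

```python
import math

def decayed_confidence(confidence: float, days_since_access: float,
                       half_life_days: float = 30.0) -> float:
    """Exponentially fade confidence for memories that go unused.

    After half_life_days without access, confidence halves.
    """
    return confidence * math.pow(0.5, days_since_access / half_life_days)

# A memory untouched for one half-life drops to half its confidence...
print(decayed_confidence(0.8, 30))  # 0.4
# ...while a freshly accessed one keeps its full score.
print(decayed_confidence(0.8, 0))   # 0.8
```

Temporal promotion is the mirror image: resetting days_since_access on retrieval keeps frequently used memories near full confidence.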

Write Gate (W(m))
├── Surprise scoring    — reject redundant memories at the door
├── Worthiness check    — surprise × importance × (1 - redundancy)
└── Force flag          — bypass for explicit writes
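
The worthiness check reduces to one product. A minimal sketch of the gate follows; the function names and the threshold value are hypothetical, only the formula W(m) = surprise × importance × (1 − redundancy) comes from the design above.

```python
def worthiness(surprise: float, importance: float, redundancy: float) -> float:
    """W(m) = surprise * importance * (1 - redundancy), each factor in [0, 1]."""
    return surprise * importance * (1.0 - redundancy)

def should_write(surprise: float, importance: float, redundancy: float,
                 threshold: float = 0.1, force: bool = False) -> bool:
    """Admit a memory if it clears the gate, or if the force flag bypasses it."""
    return force or worthiness(surprise, importance, redundancy) >= threshold

# A novel, important, non-redundant fact passes...
print(should_write(0.9, 0.8, 0.1))              # True
# ...a near-duplicate is rejected at the door, unless forced.
print(should_write(0.2, 0.5, 0.9))              # False
print(should_write(0.2, 0.5, 0.9, force=True))  # True
```

Because redundancy enters as (1 − redundancy), a perfect duplicate scores zero worthiness no matter how important it looks.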

Vector Search (Optional)

brainctl works without embeddings. For vector search, install Ollama and sqlite-vec:

pip install "brainctl[vec]"
# Install Ollama: https://ollama.ai
ollama pull nomic-embed-text
brainctl-embed                    # backfill embeddings
brainctl vsearch "semantic query" # vector similarity search
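
Under the hood, vector search ranks rows by embedding similarity. A pure-Python cosine-similarity sketch (not the sqlite-vec implementation; the toy three-dimensional embeddings are made up for illustration):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

query = [0.9, 0.1, 0.0]
memories = {"dark mode pref": [0.8, 0.2, 0.1], "deploy log": [0.0, 0.1, 0.9]}

# Rank memories by similarity to the query embedding, best first.
ranked = sorted(memories, key=lambda k: cosine(query, memories[k]), reverse=True)
print(ranked[0])  # dark mode pref
```

In practice the embeddings come from nomic-embed-text via Ollama, and sqlite-vec does the nearest-neighbor search inside SQLite.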

Docker

docker build -t brainctl .
docker run -v ./data:/data brainctl              # MCP server
docker run -v ./data:/data brainctl brainctl stats  # CLI

Multi-Agent

Every operation accepts --agent / agent_id for attribution:

brainctl -a agent-alpha memory add "learned something" -c lesson
brainctl -a agent-beta entity observe "Alice" "Now leads the team"

Agents share one brain.db. Each write is attributed. Search sees everything.

Token Cost Optimization

brainctl is designed to reduce your model's token usage, not increase it. Without persistent memory, agents waste tokens re-reading files, re-asking questions, and re-discovering their environment every session. brainctl eliminates that — but only if configured well.

Output Formats

Every search command supports --output to control token consumption:

brainctl search "deploy" --output json      # default: pretty JSON (~2200 tokens)
brainctl search "deploy" --output compact   # minified JSON (~1700 tokens, ~24% savings)
brainctl search "deploy" --output oneline   # ID|type|text (~60 tokens, ~97% savings)

For agents that just need facts (not full metadata), --output oneline is the single biggest cost reduction you can make.
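
The savings come purely from dropping metadata. A rough sketch of the two shapes (the field names here are illustrative, not brainctl's actual result schema):

```python
import json

result = {"id": 42, "type": "memory",
          "text": "Python 3.12 is the minimum version",
          "category": "convention", "salience": 0.7,
          "created": "2025-01-01T00:00:00Z"}

# Default output: pretty JSON with every key per result.
full = json.dumps(result, indent=2)

# oneline output: just ID|type|text.
oneline = f"{result['id']}|{result['type']}|{result['text']}"

print(len(full), len(oneline))  # the oneline form is a fraction of the size
```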

Budget Caps

Hard-cap search output at a token limit:

brainctl search "deploy" --budget 500       # trim lowest-ranked results until output fits
brainctl search "deploy" --limit 3          # fewer results = fewer tokens
brainctl search "deploy" --min-salience 0.1 # suppress low-relevance noise
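
The --budget trim amounts to: drop the lowest-ranked results until the serialized output fits. A sketch, assuming a crude tokens ≈ characters / 4 estimate; brainctl's actual token accounting may differ:

```python
def estimate_tokens(text: str) -> int:
    """Crude heuristic: roughly four characters per token."""
    return len(text) // 4

def trim_to_budget(results: list[str], budget: int) -> list[str]:
    """Keep top-ranked results whose combined size fits the token budget."""
    kept, used = [], 0
    for line in results:  # results arrive sorted best-first
        cost = estimate_tokens(line)
        if used + cost > budget:
            break  # everything below this rank is dropped
        kept.append(line)
        used += cost
    return kept

ranked_results = ["42|memory|deploy checklist finalized",
                  "17|event|deployed v2.0 to production",
                  "88|memory|old deploy notes from 2023"]
print(trim_to_budget(ranked_results, budget=20))  # keeps the top two results
```

Because trimming is rank-ordered, a tight budget still returns the most relevant hits rather than a random subset.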

Cost Dashboard

See exactly where tokens are going:

brainctl cost

Shows: format savings comparison, queries/tokens today and last 7 days, top token-consuming agents, and actionable recommendations.

Design Principles for Low-Cost Usage

  1. Query the brain, don't inject it. Don't dump memory into every system prompt. Search when relevant.
  2. Use oneline for routine lookups. Full JSON is for debugging. Agents need facts, not metadata.
  3. Set --budget on automated queries. Cron jobs and heartbeats should cap their own output.
  4. Limit scope. --tables memories skips events/context. --category convention narrows further.
  5. Let salience filtering work. --min-salience 0.1 drops noise that wastes tokens downstream.

Contributing

See CONTRIBUTING.md for development setup, coding guidelines, and PR workflow.

License

MIT
