
A cognitive memory system for AI agents — 49 MCP tools for persistent memory, causal reasoning, and predictive intelligence

Project description

Cerebro
The Brain Behind the Code

A cognitive memory system that plugs into Claude Code (or any MCP client) and gives your AI persistent memory, learning, causal reasoning, and predictive intelligence — across every session, every project, forever.


49 MCP tools. 3-tier memory. Local-first. Install in under 3 minutes.



Why Cerebro?

:brain: Remember Everything

Your AI gets total recall. Conversations, facts, and context carry across sessions — nothing is ever forgotten.

  • Episodic memory for events, semantic for facts, working for active reasoning
  • Hybrid semantic + keyword search across all memories
  • Session continuity — pick up exactly where you left off

:gear: Learn and Adapt

Your AI gets smarter with every interaction. Solutions, failures, and patterns are tracked automatically.

  • Auto-detects solutions, failures, and antipatterns
  • Patterns auto-promote to trusted knowledge after 3+ confirmations
  • Tracks past mistakes and avoids repeating them
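The "auto-promote after 3+ confirmations" rule can be sketched as a simple counter. This is an illustrative model only — the class name and threshold constant are ours, not Cerebro's internals:

```python
# Illustrative sketch of confirmation-based promotion (not Cerebro's
# actual implementation): a pattern becomes "trusted" once it has been
# confirmed at least CONFIRMATION_THRESHOLD times.
from collections import defaultdict

CONFIRMATION_THRESHOLD = 3  # matches the "3+ confirmations" rule above

class PatternTracker:
    def __init__(self):
        self.confirmations = defaultdict(int)
        self.trusted = set()

    def confirm(self, pattern: str) -> bool:
        """Record one confirmation; return True once the pattern is trusted."""
        self.confirmations[pattern] += 1
        if self.confirmations[pattern] >= CONFIRMATION_THRESHOLD:
            self.trusted.add(pattern)
        return pattern in self.trusted
```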

:crystal_ball: Reason and Predict

Go beyond retrieval into genuine reasoning. Cerebro builds causal models and catches problems before they happen.

  • Causal models with "what-if" simulation
  • Predictive failure anticipation from historical patterns
  • Hallucination detection and confidence scoring

Quick Start

1. Install

```bash
pip install cerebro-ai
```

For semantic search (recommended — uses FAISS + sentence-transformers):

```bash
pip install cerebro-ai[embeddings]
```

Without [embeddings], Cerebro falls back to keyword-only search. Still functional, but semantic search is significantly more powerful.
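If you want to check programmatically which mode you will get, you can probe for the optional dependencies. This is a hedged sketch — Cerebro may detect its dependencies differently — assuming the extra pulls in the `faiss` and `sentence_transformers` packages:

```python
# Sketch: check whether the optional semantic-search dependencies
# (installed via `pip install cerebro-ai[embeddings]`) are importable.
import importlib.util

def has_semantic_search() -> bool:
    """Return True if both FAISS and sentence-transformers are installed."""
    return all(
        importlib.util.find_spec(mod) is not None
        for mod in ("faiss", "sentence_transformers")
    )

print("semantic search available:", has_semantic_search())
```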

2. Initialize

```bash
cerebro init
```

This creates your local memory store at ~/.cerebro/data.

3. Add to Claude Code

Add this to your MCP config (~/.claude/mcp.json):

```json
{
  "mcpServers": {
    "cerebro": {
      "command": "cerebro",
      "args": ["serve"]
    }
  }
}
```
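If you script your setup, the entry above can be merged into an existing config without clobbering other servers. A minimal sketch — the helper name is ours; the JSON shape matches the snippet above:

```python
# Sketch: merge the Cerebro server entry into an MCP config file
# (e.g. ~/.claude/mcp.json), preserving any other registered servers.
import json
from pathlib import Path

def add_cerebro(config_path: Path) -> dict:
    """Add the Cerebro entry to mcpServers and write the config back."""
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    config.setdefault("mcpServers", {})["cerebro"] = {
        "command": "cerebro",
        "args": ["serve"],
    }
    config_path.write_text(json.dumps(config, indent=2))
    return config
```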

4. Verify

Restart Claude Code and run /mcp — you should see 49 Cerebro tools. Start a conversation and Cerebro will automatically begin building your memory.

Health Check

```bash
cerebro doctor
```
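Before wiring Cerebro into a client from a setup script, you can confirm the CLI is actually on PATH. A small hedged helper (the function name is illustrative):

```python
# Sketch: verify the `cerebro` CLI is installed and reachable on PATH
# before referencing it in an MCP config.
import shutil

def cerebro_available() -> bool:
    """True if the `cerebro` command resolves on PATH."""
    return shutil.which("cerebro") is not None
```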

The Full Experience

The MCP tools give your AI persistent memory. Cerebro Pro wraps it in a complete cognitive desktop — where your AI thinks, acts, and evolves autonomously.


What You Get

These are the tools you'll use daily. Cerebro has 49 total — here are the highlights:

| Tool | What it does |
| --- | --- |
| `search` | Find anything in memory — hybrid semantic + keyword search across all conversations, facts, and learnings |
| `record_learning` | Save a solution, failure, or antipattern. Next time you hit the same problem, Cerebro surfaces it |
| `get_corrections` | Check what your AI got wrong before — so it doesn't repeat the same mistakes |
| `check_session_continuation` | Pick up where you left off. Detects in-progress work and restores full context |
| `working_memory` | Active reasoning state: hypotheses, evidence chains, scratch notes that persist across compactions |
| `causal` | Build cause-effect models. Ask "what causes X?" or simulate "what if I do Y?" |
| `predict` | Anticipate failures before they happen based on patterns from your history |
| `get_user_profile` | Your AI knows your preferences, projects, environment, and goals — no re-explaining |

See all 49 tools below or browse the full MCP Tools Reference.


Cerebro Pipeline


All 49 MCP Tools

Cerebro exposes 49 tools through the Model Context Protocol, organized into 10 categories. Every tool works with any MCP-compatible AI client.

**Memory Core (5 tools)** — Store, search, and retrieve memories

| Tool | Description |
| --- | --- |
| `save_conversation_ultimate` | Save conversations with comprehensive extraction of facts, entities, actions, and code snippets |
| `search` | Hybrid semantic + keyword search across all memories (recommended default) |
| `search_knowledge_base` | Search the central knowledge base for facts, learnings, and discoveries |
| `search_by_device` | Filter memory searches by device origin (e.g., only laptop conversations) |
| `get_chunk` | Retrieve specific memory chunks by ID for context injection |
**Knowledge Graph (5 tools)** — Entities, timelines, and user context

| Tool | Description |
| --- | --- |
| `get_entity_info` | Get information about any entity (tool, person, server, etc.) with conversation history |
| `get_timeline` | Chronological timeline of actions and decisions for a given month |
| `find_file_paths` | Find all file paths mentioned in conversations with purpose and context |
| `get_user_context` | Comprehensive user context: goals, preferences, technical environment |
| `get_user_profile` | Full personal profile: identity, relationships, projects, preferences |
**3-Tier Memory (6 tools)** — Episodic, semantic, and working memory

| Tool | Description |
| --- | --- |
| `memory_type: query_episodic` | Query event memories by date, actor, or emotional state |
| `memory_type: query_semantic` | Query general facts by domain or keyword |
| `memory_type: save_episodic` | Save event memories with emotional state and outcome |
| `memory_type: save_semantic` | Save factual knowledge with domain classification |
| `working_memory` | Active reasoning state: hypotheses, evidence chains, scratch notes |
| `consolidate` | Cluster episodes, create abstractions, strengthen connections, prune redundancies |
**Reasoning (5 tools)** — Causal models, prediction, and self-awareness

| Tool | Description |
| --- | --- |
| `reason` | Active reasoning over memories: analyze, find insights, validate hypotheses |
| `causal` | Causal models: add cause-effect links, find causes/effects, simulate "what-if" interventions |
| `predict` | Predictive simulation: anticipate failures, check patterns, suggest preventive actions |
| `self_model` | Continuous self-modeling: confidence tracking, uncertainty, hallucination checks |
| `analyze` | Pattern analysis, knowledge gap detection, skill development tracking |
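The `causal` tool's surface — add links, then simulate an intervention — can be sketched as a toy directed graph. Names here are illustrative, not Cerebro's API:

```python
# Toy causal graph: add cause->effect links, then answer "what if I do X?"
# by walking all transitive downstream effects.
from collections import defaultdict

class CausalGraph:
    def __init__(self):
        self.effects = defaultdict(set)  # cause -> set of direct effects

    def add_link(self, cause: str, effect: str) -> None:
        self.effects[cause].add(effect)

    def simulate(self, intervention: str) -> set[str]:
        """Return every effect reachable from the intervention, transitively."""
        seen, stack = set(), [intervention]
        while stack:
            node = stack.pop()
            for nxt in self.effects[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen
```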
**Learning (4 tools)** — Solutions, corrections, and antipatterns

| Tool | Description |
| --- | --- |
| `record_learning` | Record solutions, failures, or antipatterns with tags and context |
| `find_learning` | Search for proven solutions or known antipatterns by problem description |
| `analyze_conversation_learnings` | Extract learnings from a past conversation automatically |
| `get_corrections` | Retrieve corrections Claude learned from the user to avoid repeating mistakes |
**Session Continuity (6 tools)** — Never lose your place

| Tool | Description |
| --- | --- |
| `check_session_continuation` | Check for recent work-in-progress to continue |
| `get_continuation_context` | Get full context for resuming a previous session |
| `update_active_work` | Track current project state for session handoff |
| `session_handoff` | Save and restore working memory across sessions |
| `working_memory: export/import` | Export active reasoning state for handoff, import to restore |
| `session` | Session info: thread history, active sessions, summaries, continuation detection |
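The export/import handoff idea amounts to serializing reasoning state and restoring it later. A minimal round-trip sketch — function names and the JSON shape are ours, not Cerebro's format:

```python
# Sketch of a working-memory handoff: serialize hypotheses and their
# evidence to JSON on export, restore them on import.
import json

def export_state(hypotheses: list[str], evidence: dict[str, list[str]]) -> str:
    """Serialize active reasoning state for handoff to another session."""
    return json.dumps({"hypotheses": hypotheses, "evidence": evidence})

def import_state(blob: str) -> tuple[list[str], dict[str, list[str]]]:
    """Restore reasoning state from a previously exported blob."""
    state = json.loads(blob)
    return state["hypotheses"], state["evidence"]
```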
**User Intelligence (5 tools)** — Preferences, goals, and proactive suggestions

| Tool | Description |
| --- | --- |
| `preferences` | Track and evolve user preferences with confidence weighting and contradiction detection |
| `personality` | Personality evolution: traits, consistency checks, feedback-driven adaptation |
| `goals` | Detect, track, and reason about user goals with blocker identification |
| `suggest_questions` | Generate questions to fill knowledge gaps in the user profile |
| `get_suggestions` | Proactive context-aware suggestions based on current situation and history |
**Projects (2 tools)** — Project tracking and version evolution

| Tool | Description |
| --- | --- |
| `projects` | Project lifecycle: state, active list, stale detection, auto-update, activity summaries |
| `project_evolution` | Version tracking: record releases, view timeline, manage superseded versions |
**Quality (5 tools)** — Maintenance, health, and self-improvement

| Tool | Description |
| --- | --- |
| `rebuild_vector_index` | Rebuild the FAISS vector search index after bulk updates |
| `decay` | Storage decay management: run decay cycles, preview, manage golden (protected) items |
| `self_report` | Self-improvement reports: performance metrics, before/after tracking |
| `system_health_check` | Health check across all components: storage, embeddings, indexes, database |
| `quality` | Memory quality: deduplication, merge, fact linking, quality scoring |
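A decay cycle with "golden" (protected) items can be sketched as a scored-pruning pass. This is an illustrative model of the concept, not Cerebro's actual decay logic:

```python
# Illustrative decay pass: shrink each item's relevance score per cycle
# and prune anything that falls below a floor, skipping golden items.
def decay_cycle(items: list[dict], rate: float = 0.9, floor: float = 0.1) -> list[dict]:
    """items: dicts with a 'score' and optional 'golden' flag. Returns survivors."""
    survivors = []
    for item in items:
        if not item.get("golden"):
            item["score"] *= rate
        if item.get("golden") or item["score"] >= floor:
            survivors.append(item)
    return survivors
```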
**Meta (6 tools)** — Retrieval optimization, privacy, and exploration

| Tool | Description |
| --- | --- |
| `meta_learn` | Retrieval strategy optimization: A/B testing, parameter tuning, performance tracking |
| `memory_type` | Query and manage episodic vs semantic memory types with stats and migration |
| `privacy` | Secret detection, redaction statistics, sensitive conversation identification |
| `device` | Device registration and identification for multi-device memory isolation |
| `branch` | Exploration branches: create divergent reasoning paths, mark chosen/abandoned |
| `conversation` | Conversation management: tagging, notes, relevance scoring |

How It Works

```mermaid
graph LR
  A[Your AI Client] <-->|MCP Protocol| B[Cerebro Server]
  B --> C[FAISS Vector Search]
  B --> D[Knowledge Base]
  B --> E[File Storage]
```

All data stays on your machine. No cloud, no API keys, no telemetry.


Free vs Pro

| Capability | Free (This Repo) | Pro (cerebro.life) |
| --- | --- | --- |
| Memory | 49-tool MCP server. Full cognitive architecture. | Everything in Free + dashboard visualization of your memory graph and health stats. |
| Interface | Claude Code CLI or any MCP client. | Native desktop app with Mind Chat, 3D neural constellation, real-time activity. |
| Agents | Single Claude session with persistent memory. | Agent swarms — multiple Claudes collaborating on complex tasks autonomously. |
| Browser | Not included. | Autonomous browser agents: research, navigate, extract — with live video preview. |
| Automations | Not included. | Calendar-driven recurring tasks, scheduled research, automated workflows. |
| Cognitive Loop | Not included. | OODA cycle: Observe-Orient-Decide-Act. Your AI thinks and acts continuously. |

Configuration

Cerebro works out of the box with zero configuration. All settings are optional and controlled via environment variables:

| Variable | Default | Description |
| --- | --- | --- |
| `CEREBRO_DATA_DIR` | `~/.cerebro/data` | Base directory for all Cerebro data |
| `CEREBRO_EMBEDDING_MODEL` | `all-mpnet-base-v2` | Sentence transformer model for semantic search |
| `CEREBRO_EMBEDDING_DIM` | `768` | Embedding vector dimensions |
| `CEREBRO_LOG_LEVEL` | `INFO` | Logging level |
| `CEREBRO_LLM_URL` | (none) | Optional local LLM endpoint for deeper reasoning |
| `CEREBRO_LLM_MODEL` | (none) | Optional local LLM model name |
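Reading these variables follows the usual environment-with-defaults pattern. A sketch of how a server might consume them — the variable names and defaults come from the table above; the surrounding code is ours:

```python
# Sketch: read the Cerebro environment variables with their documented
# defaults (variable names and defaults from the table above).
import os
from pathlib import Path

data_dir = Path(os.environ.get("CEREBRO_DATA_DIR", "~/.cerebro/data")).expanduser()
embedding_model = os.environ.get("CEREBRO_EMBEDDING_MODEL", "all-mpnet-base-v2")
embedding_dim = int(os.environ.get("CEREBRO_EMBEDDING_DIM", "768"))
log_level = os.environ.get("CEREBRO_LOG_LEVEL", "INFO")
llm_url = os.environ.get("CEREBRO_LLM_URL")  # None unless configured
```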

Set them in your MCP config:

```json
{
  "mcpServers": {
    "cerebro": {
      "command": "cerebro",
      "args": ["serve"],
      "env": {
        "CEREBRO_DATA_DIR": "/path/to/your/data"
      }
    }
  }
}
```

Contributing

Contributions are welcome — bug fixes, new MCP tools, documentation improvements, or feature ideas.

Please read the Contributing Guide before submitting a pull request. All contributions must be compatible with the AGPL-3.0 license.


License & Attribution

Copyright (C) 2026 Michael Lopez (Professor-Low)

Cerebro is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0).
See LICENSE for details.

What AGPL-3.0 means: If you use Cerebro's code in your own product — including as a network service — you must release your modified source code under the same license and give proper attribution. This protects the project from being taken proprietary.

Created and maintained by Michael Lopez (Professor-Low)


Get Started · Cerebro Pro · Architecture · Issues

If Cerebro helps you, consider giving it a star — it helps others find the project.


cerebro.life



Download files

Download the file for your platform.

Source Distribution

cerebro_ai-1.5.3.tar.gz (456.8 kB)

Uploaded Source

Built Distribution


cerebro_ai-1.5.3-py3-none-any.whl (492.9 kB)

Uploaded Python 3

File details

Details for the file cerebro_ai-1.5.3.tar.gz.

File metadata

  • Download URL: cerebro_ai-1.5.3.tar.gz
  • Size: 456.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.8

File hashes

Hashes for cerebro_ai-1.5.3.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | `e57ebece1d6f08187da1633ef2ac15a36903e2494c639b170c4ae95db6c49539` |
| MD5 | `9b75f52db1ab9957b0b61894549a8f0e` |
| BLAKE2b-256 | `d4a1591f038659dc22268f0e2f64017e00dd77c929de578d97c0cc9d6e793ba0` |

See more details on using hashes here.
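To verify a downloaded file against a published digest, hash it locally and compare. The SHA256 value below is the sdist hash from the table above; the helper function is a standard pattern, not Cerebro-specific:

```python
# Verify a downloaded file against a published SHA256 digest.
import hashlib

# sdist hash from the table above
EXPECTED_SHA256 = "e57ebece1d6f08187da1633ef2ac15a36903e2494c639b170c4ae95db6c49539"

def sha256_of(path: str) -> str:
    """Stream the file through SHA256 in 8 KiB chunks and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage: sha256_of("cerebro_ai-1.5.3.tar.gz") == EXPECTED_SHA256
```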

File details

Details for the file cerebro_ai-1.5.3-py3-none-any.whl.

File metadata

  • Download URL: cerebro_ai-1.5.3-py3-none-any.whl
  • Size: 492.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.8

File hashes

Hashes for cerebro_ai-1.5.3-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | `a4cff361932d7a37689a7891524444cf0426fbd5bf1393e751cc61a755d1c112` |
| MD5 | `c10b0e512eb8cb9c861e3956e57f2eed` |
| BLAKE2b-256 | `44dfc68ea95b43a66613c227dab16edf8e1ebd1012156ecfb86c334e5f17e2a4` |

