# Agent Cerebro

Persistent two-tier memory for AI agents. Battle-tested across 134 sessions with 10 agent roles.

Short-term (markdown files, always loaded) + long-term (SQLite + OpenAI embeddings, searched on-demand).
## Install

```bash
pip install agent-cerebro
```

Zero required dependencies; SQLite ships with the Python standard library.

Optional semantic search:

```bash
pip install agent-cerebro[embeddings]
export OPENAI_API_KEY="sk-..."
```
## Quick Start

### CLI
```bash
# Initialize
cerebro init

# Store a memory (auto-dedup via cosine similarity >0.92)
cerebro store coder gotchas "kamal app exec spawns new container, use docker exec"
cerebro store social exhausted_stories "blue-green deploy order loss" --tags deploy,sqlite

# Search (semantic + keyword fallback)
cerebro search coder gotchas "kamal file not found"
cerebro search coder gotchas "deploy issue" --tag critical

# List categories
cerebro list coder

# Timeline — chronological view of all memories
cerebro timeline coder
cerebro timeline coder --last 7d
cerebro timeline coder --last 2w --category gotchas

# Export — dump all memories for a role
cerebro export coder --format md > coder_memories.md
cerebro export coder --format json > coder_memories.json
cerebro export coder --format json --category gotchas

# Stats — storage metrics and category breakdown
cerebro stats
cerebro stats coder

# Garbage collection — find and remove near-duplicates
cerebro gc coder --dry-run
cerebro gc coder --apply
cerebro gc coder --threshold 0.85 --category gotchas

# Check health
cerebro check --all
```
### Python API
```python
from agentrecall import MemoryStore, MemorySearch, MemoryTimeline, MemoryExport, MemoryStats, MemoryGC

# Store
store = MemoryStore()
store.store("coder", "gotchas", "kamal spawns new container", tags=["kamal", "docker"])

# Search (with optional tag filter)
search = MemorySearch()
results = search.search("coder", "gotchas", "kamal file not found")
results = search.search("coder", "gotchas", "deploy issue", tag="critical")

# Timeline
timeline = MemoryTimeline()
entries = timeline.timeline("coder", last="7d")

# Export
export = MemoryExport()
markdown = export.export("coder", fmt="md")
json_str = export.export("coder", fmt="json", category="gotchas")

# Stats
stats = MemoryStats()
metrics = stats.stats(role="coder")
# → {total_entries, total_with_embeddings, embedding_coverage_pct, db_size_bytes, ...}

# Garbage collection
gc = MemoryGC()
result = gc.gc("coder", dry_run=True)
# → {found: 3, removed: 0, duplicates: [...]}
result = gc.gc("coder", dry_run=False)  # actually delete
```
## How It Works

### Two-Tier Design

| Short-term (`memory/<role>.md`) | Long-term (SQLite + embeddings) |
|---|---|
| Active learnings, mistakes, feedback | Growing lists (exhausted topics, defect patterns) |
| Max 80 lines, pruned regularly | Unlimited entries, never pruned |
| Read in full at session start | Searched on-demand per query |
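The short-term tier's load-then-prune behavior can be sketched as follows. The function names here are illustrative, not part of the package API:

```python
from pathlib import Path

MAX_SHORT_TERM_LINES = 80  # short-term files are capped and pruned regularly

def load_short_term(memory_dir: str, role: str) -> str:
    """Read the role's short-term markdown file in full at session start."""
    path = Path(memory_dir) / f"{role}.md"
    return path.read_text() if path.exists() else ""

def prune_short_term(text: str, max_lines: int = MAX_SHORT_TERM_LINES) -> str:
    """Drop the oldest lines so at most max_lines remain."""
    return "\n".join(text.splitlines()[-max_lines:])
```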
### Semantic Dedup

Every `store` call embeds the text via OpenAI `text-embedding-3-small` and checks cosine similarity against all existing entries in the same role/category. Similarity above 0.92 blocks the store (raises `DuplicateError`).

Without an API key, dedup falls back to exact text matching.
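The dedup check reduces to a cosine-similarity comparison. A minimal sketch, with illustrative names rather than the package's internals:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def is_duplicate(new_embedding, existing_embeddings, threshold=0.92):
    """True when any existing entry is too similar to store the new one."""
    return any(cosine_similarity(new_embedding, e) > threshold
               for e in existing_embeddings)
```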
### Search

- Embed the query
- Compute cosine similarity against all entries with embeddings
- Return entries above the threshold (0.75), sorted by similarity
- If no embedding matches: keyword fallback (>=50% of query words must match)
- No API key: keyword-only search
- An optional `--tag` filter narrows results to entries with a specific tag
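The semantic ranking step above can be sketched like this (`semantic_rank` is an illustrative name; the real logic lives behind `MemorySearch`):

```python
import math

def _cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def semantic_rank(query_embedding, entries, threshold=0.75):
    """entries: (text, embedding) pairs. Return texts at or above the
    similarity threshold, best match first."""
    scored = [(_cosine(query_embedding, emb), text) for text, emb in entries]
    return [text for score, text in sorted(scored, reverse=True)
            if score >= threshold]
```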
### Garbage Collection

`cerebro gc` finds near-duplicate entries within each role/category pair:

- With embeddings: cosine similarity >= threshold (default 0.92)
- Without embeddings: exact text match (case-insensitive)
- The older entry (lower ID) is kept; the newer duplicate is removed
- `--dry-run` (default) reports without deleting; `--apply` actually removes duplicates
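Under the exact-match fallback, the keep-oldest rule looks roughly like this (`gc_duplicates` is a hypothetical helper, not the package API):

```python
def gc_duplicates(entries):
    """entries: (id, text) pairs. Return ids of newer duplicates
    (case-insensitive exact match); the lowest id per text survives."""
    seen = set()
    to_remove = []
    for entry_id, text in sorted(entries):  # ascending id = oldest first
        key = text.strip().lower()
        if key in seen:
            to_remove.append(entry_id)  # newer duplicate
        else:
            seen.add(key)
    return to_remove
```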
### Graceful Degradation

Works fully offline without an OpenAI API key:

- Store: exact text dedup (case-insensitive)
- Search: keyword matching (>=50% of query words must appear)
- GC: exact text match dedup only
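The keyword fallback is a word-overlap ratio. A sketch, assuming simple whitespace tokenization:

```python
def keyword_match(query: str, entry_text: str, min_ratio: float = 0.5) -> bool:
    """True when at least min_ratio of the query's words appear in the entry."""
    words = {w.lower() for w in query.split()}
    if not words:
        return False
    text = entry_text.lower()
    hits = sum(1 for w in words if w in text)
    return hits / len(words) >= min_ratio
```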
## Agent Skills

Copy `skill/agent-recall/` into your project's skills directory for use with Claude Code, Codex, Cursor, Copilot, Cline, or Goose.

```bash
cp -r skill/agent-recall/ .claude/skills/agent-recall/
```
## Configuration

Environment variables:

| Variable | Default | Description |
|---|---|---|
| `AGENT_CEREBRO_HOME` | `~/.agent-cerebro` | Memory storage directory |
| `OPENAI_API_KEY` | (none) | OpenAI API key for embeddings |
| `UT_OPENAI_API_KEY` | (none) | Preferred over `OPENAI_API_KEY` |
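The precedence rule can be expressed as a small resolver (hypothetical helpers, not part of the package API):

```python
import os

def resolve_api_key(env=None):
    """UT_OPENAI_API_KEY takes precedence over OPENAI_API_KEY."""
    env = os.environ if env is None else env
    return env.get("UT_OPENAI_API_KEY") or env.get("OPENAI_API_KEY")

def memory_home(env=None):
    """Storage directory, defaulting to ~/.agent-cerebro."""
    env = os.environ if env is None else env
    return env.get("AGENT_CEREBRO_HOME") or os.path.expanduser("~/.agent-cerebro")
```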
## CLI Reference

```text
cerebro store <role> <category> "text" [--tags t1,t2] [--db path]
cerebro search <role> <category> "query" [--tag tagname] [--db path]
cerebro list <role> [--db path]
cerebro timeline <role> [--last 7d] [--category cat] [--limit N] [--db path]
cerebro export <role> [--format md|json] [--category cat] [--db path]
cerebro stats [role] [--db path]
cerebro gc <role> [--dry-run] [--apply] [--threshold 0.92] [--category cat] [--db path]
cerebro check [--fix] [--long-term] [--all] [--dir path] [--db path]
cerebro init [--dir path]
cerebro migrate [--dry-run] [--rebuild] [--dir path] [--db path]
```
`agentrecall` and `agentmemory` also work as CLI aliases.

Exit codes: `0` = success/found, `1` = not-found/validation-fail, `2` = input error.
## Migration from JSONL

If you have existing JSONL memory files:

```bash
cerebro migrate --dir /path/to/memory/
cerebro migrate --rebuild  # re-embed entries missing embeddings
```
## Related Tools

Part of the Ultrathink Agent Suite:
- Agent Architect Kit — Multi-agent starter kit that uses Cerebro for cross-session memory
- Agent Orchestra — Task queue + orchestration CLI for spawning and managing agents
- AgentBrush — Image editing toolkit for AI agents
Built by an AI-run dev shop.
## License

MIT