
Shared memory infrastructure for multi-instance AI agents


Tribal Memory

Your AI tools don't share a brain. Tribal Memory gives them one.

One memory store, many agents. Teach Claude Code something — Codex already knows it. That's not just persistence — it's cross-agent intelligence.

One Brain, Two Agents: Claude Code stores architecture decisions; Codex recalls them instantly.



Why

Every AI coding assistant starts fresh. Claude Code doesn't know what you told Codex. Codex doesn't know what you told Claude. You repeat yourself constantly.

Tribal Memory is a shared memory server that any AI agent can connect to via MCP. Store a memory from one agent, recall it from another. It just works.


Install

pip install tribalmemory    # or: uv tool install tribalmemory

Quick Start

Zero cloud. Zero API keys. Everything runs locally.

pip install tribalmemory
tribalmemory init
tribalmemory serve

That's it. No config editing required.

First run: FastEmbed downloads a ~130MB ONNX model. After that, embeddings are instant and fully offline.

Server runs on http://localhost:18790.

Running as a Service (Optional)

Keep the server running in the background with automatic restarts:

# Install and start the service (systemd on Linux, launchd on macOS)
tribalmemory service install

# Check status
tribalmemory service status

# View logs
tribalmemory service logs

# Stop and remove
tribalmemory service uninstall

User-level services only — no root/sudo required.
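
On Linux, `service install` sets up a user-level systemd unit. A hand-written equivalent might look roughly like this (the unit name and binary path are assumptions; the actual generated file may differ):

```ini
# ~/.config/systemd/user/tribalmemory.service (illustrative)
[Unit]
Description=Tribal Memory server

[Service]
ExecStart=%h/.local/bin/tribalmemory serve
Restart=on-failure

[Install]
WantedBy=default.target
```

Enable it with `systemctl --user enable --now tribalmemory`; no sudo is needed, matching the user-level behavior above.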


Integrations

Tribal Memory connects to AI agents via MCP (Model Context Protocol). Set up one or more of these:

Claude Code (CLI)

# Auto-configure (recommended)
tribalmemory init --claude-code

Or manually — add to ~/.claude.json:

{
  "mcpServers": {
    "tribal-memory": {
      "command": "tribalmemory-mcp"
    }
  }
}
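
The manual edit above can also be scripted. A minimal sketch of merging the entry into an existing config without clobbering other servers (the file layout is taken from the JSON example above; error handling is omitted):

```python
import json
from pathlib import Path

def add_mcp_server(config: dict, name: str, command: str) -> dict:
    """Return a copy of the config with one MCP server entry added."""
    merged = dict(config)
    servers = dict(merged.get("mcpServers", {}))
    servers[name] = {"command": command}
    merged["mcpServers"] = servers
    return merged

def install(path: Path = Path.home() / ".claude.json") -> None:
    """Read ~/.claude.json (if present), add the entry, write it back."""
    config = json.loads(path.read_text()) if path.exists() else {}
    updated = add_mcp_server(config, "tribal-memory", "tribalmemory-mcp")
    path.write_text(json.dumps(updated, indent=2))
```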

Auto-Capture

By default, Claude Code has the MCP tools available but won't use them unless you ask. Add --auto-capture to make Claude Code proactively store and recall memories:

tribalmemory init --claude-code --auto-capture

This appends instructions to ~/.claude/CLAUDE.md that tell Claude Code to:

  • Auto-recall relevant memories at the start of each conversation
  • Auto-store important decisions, architecture choices, and key facts
  • Use tribal_store and tribal_recall without being explicitly asked

Without --auto-capture, you can still use memory manually by saying "remember that..." or "what do you know about...".

Now Claude Code has persistent memory across sessions:

You: Remember that the auth service uses JWT with RS256
Claude: ✅ Stored.

--- next session ---

You: How does the auth service work?
Claude: Based on my memory, the auth service uses JWT with RS256...

Claude Desktop

# Auto-configure (recommended — resolves the full binary path automatically)
tribalmemory init --claude-desktop

Claude Desktop doesn't inherit your shell PATH, so the bare command tribalmemory-mcp won't work. The init flag resolves the absolute path and writes it to claude_desktop_config.json for you.
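
Because Claude Desktop launches without your shell PATH, the init flag has to write an absolute path. The resolution step is essentially what `shutil.which` does (a sketch, not Tribal Memory's actual implementation):

```python
import shutil

def resolve_command(command: str) -> str:
    """Resolve a bare command name to an absolute path, or fail loudly."""
    path = shutil.which(command)
    if path is None:
        raise FileNotFoundError(
            f"{command} not found on PATH; is the package installed?"
        )
    return path
```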

Both Claude Apps

# Configure Claude Code CLI and Claude Desktop together
tribalmemory init --claude-code --claude-desktop

Codex (CLI & Desktop)

The Codex CLI and desktop app share the same config file (~/.codex/config.toml), so one command sets up both:

# Auto-configure (recommended — works for both CLI and desktop app)
tribalmemory init --codex

Or manually — add to ~/.codex/config.toml:

[mcp_servers.tribal-memory]
command = "tribalmemory-mcp"

Note: The init flag resolves the full binary path automatically, so the desktop app finds the command even if it doesn't inherit your shell PATH.

That's it. Codex now shares the same memory store as Claude Code. Memories stored by one are instantly available to the other.

Auto-capture works for Codex too — it writes instructions to ~/.codex/AGENTS.md:

tribalmemory init --codex --auto-capture

Set Up Everything at Once

# Configure all agents + auto-capture + background service
tribalmemory init --claude-code --codex --auto-capture --service

One command: MCP configured for both agents, auto-capture enabled, server running as a service.

OpenClaw

Tribal Memory includes a plugin for OpenClaw:

openclaw plugins install ./extensions/memory-tribal
openclaw config set plugins.slots.memory=memory-tribal

How memories are saved:

  • Automatically — Memories are captured when the agent responds (preferences, decisions, key facts)
  • On demand — Use /remember <thing to remember> for immediate storage:

/remember Joe's birthday is March 15
/remember Always use TypeScript for new projects

Cloud Setup (Coming Soon)

A hosted Tribal Memory service for teams — no server management, automatic syncing across machines. Star the repo for updates.


Demo

Run the interactive demo to see Tribal Memory in action:

./demo.sh

See docs/demo-output.md for sample output.


Self-Hosted Setup

Configuration

Generated by tribalmemory init. Lives at ~/.tribal-memory/config.yaml:

instance_id: my-agent

embedding:
  provider: fastembed
  model: BAAI/bge-small-en-v1.5
  dimensions: 384

db:
  provider: lancedb
  path: ~/.tribal-memory/lancedb

server:
  host: 127.0.0.1
  port: 18790

search:
  lazy_spacy: true    # 70x faster ingest (default: true)

Entity Extraction (Optional)

For better recall on personal conversations (finding people, places, dates), install spaCy:

pip install tribalmemory[spacy]
python -m spacy download en_core_web_sm

With lazy spaCy (default), entity extraction is blazing fast:

  • Ingest: Uses fast regex patterns (~2-3 seconds per conversation)
  • Recall: Runs spaCy NER once on your query for accurate entity matching

This gives you the best of both worlds — fast ingestion AND accurate retrieval for personal conversations.
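
The split described above can be illustrated with a tiny sketch: a cheap regex pass at ingest time, with the heavier NER model loaded only when a query arrives. The patterns and function names here are illustrative, not Tribal Memory's internals:

```python
import re

# Cheap ingest-time pass: capitalized name runs and ISO-style dates.
_FAST_ENTITY = re.compile(
    r"\b(?:[A-Z][a-z]+(?:\s[A-Z][a-z]+)*|\d{4}-\d{2}-\d{2})\b"
)

def fast_entities(text: str) -> list[str]:
    """Regex-only extraction used while ingesting (no model load)."""
    return _FAST_ENTITY.findall(text)

_nlp = None  # spaCy pipeline, loaded lazily on first recall

def query_entities(query: str) -> list[str]:
    """Accurate extraction at recall time; loads spaCy on first call."""
    global _nlp
    if _nlp is None:
        try:
            import spacy
            _nlp = spacy.load("en_core_web_sm")
        except (ImportError, OSError):
            return fast_entities(query)  # graceful fallback without spaCy
    return [ent.text for ent in _nlp(query).ents]
```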

Environment Variables

Variable                    Description
TRIBAL_MEMORY_CONFIG        Path to config file (default: ~/.tribal-memory/config.yaml)
TRIBAL_MEMORY_INSTANCE_ID   Override instance ID

Docker

docker compose up -d

Mount a custom config.yaml to change embedding model or dimensions. See docker-compose.yml for all options.


Tribal API

HTTP Endpoints

All endpoints are under the /v1 prefix.

# Store a memory
curl -X POST http://localhost:18790/v1/remember \
  -H "Content-Type: application/json" \
  -d '{"content": "The database uses Postgres 16", "tags": ["infra"]}'

# Batch store (up to 1000 memories)
curl -X POST http://localhost:18790/v1/remember/batch \
  -H "Content-Type: application/json" \
  -d '{"memories": [
    {"content": "Auth uses JWT with RS256"},
    {"content": "Database is Postgres 16", "tags": ["infra"]}
  ]}'

# Search memories (auto-parses dates from query)
curl -X POST http://localhost:18790/v1/recall \
  -H "Content-Type: application/json" \
  -d '{"query": "what did we discuss last week?", "limit": 5}'

# Search with explicit temporal filter
curl -X POST http://localhost:18790/v1/recall \
  -H "Content-Type: application/json" \
  -d '{"query": "database decisions", "after": "2026-01-01", "limit": 5}'

# Health check
curl http://localhost:18790/v1/health

# Get stats
curl http://localhost:18790/v1/stats

MCP Tools

When connected via MCP, your AI gets these tools:

Tool                     Description
tribal_store             Store a new memory with deduplication
tribal_recall            Search memories (vector + graph expansion)
tribal_recall_entity     Query by entity name with hop traversal
tribal_entity_graph      Explore entity relationships
tribal_correct           Update or correct an existing memory
tribal_forget            Delete a memory
tribal_stats             Get memory statistics
tribal_export            Export memories to portable JSON
tribal_import            Import memories from a bundle
tribal_sessions_ingest   Index conversation transcripts

Python API

from tribalmemory.services import create_memory_service

# FastEmbed uses BAAI/bge-small-en-v1.5 (384 dims) by default
service = create_memory_service(
    instance_id="my-agent",
    db_path="./memories",
)

# Store
result = await service.remember(
    "User prefers TypeScript for web projects",
    tags=["preference", "coding"]
)

# Recall
results = await service.recall("What language for web?")
for r in results:
    print(f"{r.similarity_score:.2f}: {r.memory.content}")

# Correct
await service.correct(
    original_id=result.memory_id,
    corrected_content="User prefers TypeScript for web, Python for scripts"
)

Architecture

┌─────────────┐
│  Claude Code │──── MCP ────┐
└─────────────┘              │
┌─────────────┐              ▼
│  Codex CLI   │──── MCP ───► Tribal Memory Server
└─────────────┘              ▲  (localhost:18790)
┌─────────────┐              │
│  OpenClaw    │── plugin ───┘
└─────────────┘

The server is the single source of truth. Each agent connects as an instance. Memories are tagged with source_instance so you can see who learned what.
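
The per-instance tagging can be pictured as a field on each stored record. A hypothetical shape (field names assumed for illustration; the real schema lives in LanceDB):

```python
import datetime
from dataclasses import dataclass, field

@dataclass
class MemoryRecord:
    """Illustrative record shape, not the actual Tribal Memory schema."""
    content: str
    source_instance: str  # which agent learned this
    tags: list[str] = field(default_factory=list)
    created_at: datetime.datetime = field(
        default_factory=lambda: datetime.datetime.now(datetime.timezone.utc)
    )

# A memory stored by Claude Code is visible to Codex, provenance intact.
m = MemoryRecord("Auth uses JWT with RS256", source_instance="claude-code")
```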


Features

  • Semantic search — Find memories by meaning, not keywords
  • Cross-agent sharing — Memories from one agent are available to all
  • Graph search — Entity extraction + relationship traversal
  • Graph visualization — Built-in web UI to explore your knowledge graph at /graph
  • Hybrid retrieval — Vector + BM25 keyword search combined
  • Zero cloud — Local ONNX embeddings via FastEmbed, no API keys needed
  • Batch ingestion — Store up to 1000 memories in a single request
  • Auto-temporal queries — "What happened last week?" auto-parses dates
  • Session indexing — Index conversation transcripts for search
  • Automatic deduplication — Won't store the same thing twice
  • Memory corrections — Update outdated information with audit trail
  • Temporal reasoning — Date extraction and time-based filtering
  • Import/export — Portable JSON bundles with embedding metadata
  • Token budgets — Smart context management to avoid LLM overload
  • MCP server — Native integration with Claude Code, Codex, and more
  • Benchmark tested — 100% accuracy on LoCoMo (1986 questions, all categories)
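
Deduplication of the kind listed above is typically done by comparing embeddings. A self-contained sketch using cosine similarity with an assumed threshold (the actual mechanism and threshold are Tribal Memory internals):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def is_duplicate(new_vec, existing_vecs, threshold: float = 0.95) -> bool:
    """Treat a memory as a duplicate if any stored embedding is near-identical."""
    return any(cosine(new_vec, v) >= threshold for v in existing_vecs)
```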

Privacy

Zero data leaves your machine:

  • Embeddings computed locally (FastEmbed + ONNX runtime)
  • Memories stored locally in LanceDB
  • No API keys, no cloud services, no telemetry

Development

git clone https://github.com/abbudjoe/TribalMemory.git
cd TribalMemory
pip install -e ".[dev]"

# Run tests
PYTHONPATH=src pytest

# Run linting
ruff check .
black --check .

License

Business Source License 1.1 — see LICENSE
