# Tribal Memory

Shared memory infrastructure for multi-instance AI agents.

**Your AI tools don't share a brain. Tribal Memory gives them one.**

One memory store, many agents. Teach Claude Code something, and Codex already knows it. That's not just persistence; it's cross-agent intelligence.

> Claude Code stores architecture decisions → Codex recalls them instantly
## Why
Every AI coding assistant starts fresh. Claude Code doesn't know what you told Codex. Codex doesn't know what you told Claude. You repeat yourself constantly.
Tribal Memory is a shared memory server that any AI agent can connect to. Store a memory from one agent, recall it from another. It just works.
## Install

**macOS:**

```sh
# Install uv (https://docs.astral.sh/uv/)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Restart your terminal, or run:
source ~/.zshrc

# Install tribalmemory
uv tool install tribalmemory
```

> **Why uv?** macOS blocks `pip install` into the system Python with "externally-managed-environment" errors. `uv tool install` handles isolated environments automatically.

**Linux:**

```sh
pip install tribalmemory

# Or with uv:
# curl -LsSf https://astral.sh/uv/install.sh | sh
# source ~/.bashrc
# uv tool install tribalmemory
```
## Quick Start

### Option A: Local Mode (Zero Cloud, Zero Cost)

No API keys. No cloud. Everything runs on your machine.

```sh
# Set up with local Ollama embeddings
tribalmemory init --local

# Pull an embedding model (one time)
ollama pull nomic-embed-text

# Start the server
tribalmemory serve
```

### Option B: OpenAI Embeddings

```sh
# Set up with OpenAI
export OPENAI_API_KEY=sk-...
tribalmemory init

# Start the server
tribalmemory serve
```

The server runs on `http://localhost:18790`.
## Claude Code Integration (MCP)

```sh
# Auto-configure Claude Code
tribalmemory init --local --claude-code
```

Or manually add to your Claude Code MCP config:

```json
{
  "mcpServers": {
    "tribal-memory": {
      "command": "tribalmemory-mcp"
    }
  }
}
```

Now Claude Code has persistent memory across sessions:

```
You: Remember that the auth service uses JWT with RS256
Claude: ✅ Stored.

--- next session ---

You: How does the auth service work?
Claude: Based on my memory, the auth service uses JWT with RS256...
```
## Architecture

```
┌─────────────┐
│ Claude Code │──── MCP ────┐
└─────────────┘             │
┌─────────────┐             ▼
│  Codex CLI  │──── MCP ───►  Tribal Memory Server
└─────────────┘             ▲   (localhost:18790)
┌─────────────┐             │
│  OpenClaw   │── plugin ───┘
└─────────────┘
```

The server is the single source of truth. Each agent connects as an instance. Memories are tagged with `source_instance` so you can see who learned what.
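The `source_instance` tagging can be pictured with a minimal sketch; the `MemoryRecord` field names below are illustrative, not the actual tribalmemory schema:

```python
# Hypothetical sketch of provenance tagging: every memory carries the ID of
# the agent instance that stored it, so recalls can show who learned what.
from dataclasses import dataclass

@dataclass
class MemoryRecord:
    content: str
    source_instance: str  # which agent instance stored this memory

store = [
    MemoryRecord("auth service uses JWT with RS256", source_instance="claude-code"),
    MemoryRecord("deploy target is Fly.io", source_instance="codex-cli"),
]

# Any agent can recall everything; provenance is preserved per record:
learned_by_claude = [m for m in store if m.source_instance == "claude-code"]
```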
## Features
- Semantic search — Find memories by meaning, not keywords
- Cross-agent sharing — Memories from one agent are available to all
- Automatic deduplication — Won't store the same thing twice
- Memory corrections — Update outdated information with audit trail
- Import/export — Portable JSON bundles with embedding metadata
- Token budgets — Smart context management to avoid LLM overload
- Local-only mode — Ollama + LanceDB = zero data leaves your machine
- MCP server — Native integration with Claude Code and compatible tools
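The semantic search and deduplication features above boil down to similarity over embedding vectors. A minimal illustration with toy vectors, assuming nothing about tribalmemory's internals (the `0.95` threshold and helper names are hypothetical):

```python
# Sketch: rank memories by cosine similarity to a query embedding, and
# treat anything above a similarity threshold as a duplicate.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-dimensional "embeddings" (real models use 768+ dimensions)
memories = {
    "db uses Postgres 16": [0.9, 0.1, 0.0],
    "frontend is React":   [0.0, 0.8, 0.2],
}

def recall(query_vec, limit=5):
    ranked = sorted(memories.items(), key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return ranked[:limit]

def is_duplicate(new_vec, threshold=0.95):
    return any(cosine(new_vec, v) >= threshold for v in memories.values())
```

Searching by meaning rather than keywords is exactly this: the query "what database" embeds near "db uses Postgres 16" even though they share no words.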
## MCP Tools

When connected via MCP, your AI gets these tools:

| Tool | Description |
|---|---|
| `tribal_remember` | Store a new memory with deduplication |
| `tribal_recall` | Search memories by semantic similarity |
| `tribal_correct` | Update/correct an existing memory |
| `tribal_forget` | Delete a memory (soft delete) |
| `tribal_stats` | Get memory statistics |
| `tribal_export` | Export memories to portable JSON |
| `tribal_import` | Import memories from a bundle |
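Under the hood, MCP tool calls travel as JSON-RPC 2.0 messages (per the Model Context Protocol spec). A sketch of what a `tribal_remember` invocation might look like on the wire; the argument names are assumptions about the tool's schema, not verified against the server:

```python
# Shape of an MCP "tools/call" request as a JSON-RPC 2.0 message.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "tribal_remember",
        "arguments": {
            # Hypothetical argument names for illustration only.
            "content": "The auth service uses JWT with RS256",
            "tags": ["architecture"],
        },
    },
}
wire = json.dumps(request)  # what the MCP client actually sends
```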
## Configuration

Generated by `tribalmemory init`. Lives at `~/.tribal-memory/config.yaml`:

```yaml
instance_id: my-agent

embedding:
  provider: openai
  model: nomic-embed-text              # or text-embedding-3-small
  api_base: http://localhost:11434/v1  # Ollama (omit for OpenAI)
  dimensions: 768                      # 768 for nomic, 1536 for OpenAI
  # api_key not needed for local Ollama

db:
  provider: lancedb
  path: ~/.tribal-memory/lancedb

server:
  host: 127.0.0.1
  port: 18790
```
## Environment Variables

| Variable | Description |
|---|---|
| `OPENAI_API_KEY` | OpenAI API key (not needed for local mode) |
| `TRIBAL_MEMORY_CONFIG` | Path to config file (default: `~/.tribal-memory/config.yaml`) |
| `TRIBAL_MEMORY_INSTANCE_ID` | Override instance ID |
| `TRIBAL_MEMORY_EMBEDDING_API_BASE` | Override embedding API base URL |
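The override variables follow the usual env-over-file precedence. A hypothetical sketch of that resolution order, not tribalmemory's actual code:

```python
# Env vars win over config-file values; the config path itself can also
# be redirected via TRIBAL_MEMORY_CONFIG.
import os

DEFAULT_CONFIG = os.path.expanduser("~/.tribal-memory/config.yaml")

def resolve_config_path(env=os.environ):
    # An explicit TRIBAL_MEMORY_CONFIG wins over the default location.
    return env.get("TRIBAL_MEMORY_CONFIG", DEFAULT_CONFIG)

def resolve_instance_id(file_value, env=os.environ):
    # TRIBAL_MEMORY_INSTANCE_ID overrides whatever the config file says.
    return env.get("TRIBAL_MEMORY_INSTANCE_ID", file_value)
```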
## Python API

```python
import asyncio

from tribalmemory.services import create_memory_service

async def main():
    # Local embeddings
    service = create_memory_service(
        instance_id="my-agent",
        db_path="./memories",
        api_base="http://localhost:11434/v1",
        embedding_model="nomic-embed-text",
        embedding_dimensions=768,
    )

    # Store
    result = await service.remember(
        "User prefers TypeScript for web projects",
        tags=["preference", "coding"]
    )

    # Recall
    results = await service.recall("What language for web?")
    for r in results:
        print(f"{r.similarity_score:.2f}: {r.memory.content}")

    # Correct
    await service.correct(
        original_id=result.memory_id,
        corrected_content="User prefers TypeScript for web, Python for scripts"
    )

asyncio.run(main())
```
## Demo

See cross-agent memory sharing in action:

```sh
# Start the server
tribalmemory serve

# Run the interactive demo
./demo.sh
```

See `docs/demo-output.md` for example output.
## HTTP API

All endpoints are under the `/v1` prefix.

```sh
# Store a memory
curl -X POST http://localhost:18790/v1/remember \
  -H "Content-Type: application/json" \
  -d '{"content": "The database uses Postgres 16", "tags": ["infra"]}'

# Search memories
curl -X POST http://localhost:18790/v1/recall \
  -H "Content-Type: application/json" \
  -d '{"query": "what database", "limit": 5}'

# Get stats
curl http://localhost:18790/v1/stats

# Health check
curl http://localhost:18790/v1/health
```
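The same endpoints can be called from Python with only the standard library. A sketch assuming the server from `tribalmemory serve` is running locally; the `post` helper is illustrative, not part of the package:

```python
# Minimal stdlib HTTP client for the /v1 endpoints shown above.
import json
import urllib.request

BASE = "http://localhost:18790/v1"

def post(path, payload):
    req = urllib.request.Request(
        f"{BASE}{path}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# With the server running:
# post("/remember", {"content": "The database uses Postgres 16", "tags": ["infra"]})
# post("/recall", {"query": "what database", "limit": 5})
```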
## OpenClaw Integration

Tribal Memory includes a plugin for OpenClaw:

```sh
openclaw plugins install ./extensions/memory-tribal
openclaw config set plugins.slots.memory=memory-tribal
```
## Development

```sh
git clone https://github.com/abbudjoe/TribalMemory.git
cd TribalMemory
pip install -e ".[dev]"

# Run tests
PYTHONPATH=src pytest

# Run linting
ruff check .
black --check .
```
## Privacy

In local mode (Ollama + LanceDB), zero data leaves your machine:

- Embeddings are computed locally by Ollama
- Memories are stored locally in LanceDB
- No API keys, no cloud services, no telemetry
## License
Apache 2.0 — see LICENSE