Shared cognitive substrate for AI agents (local-first, Markdown-native, MCP)
Lithos
Shared memory for AI agents.
A local, privacy-first knowledge base that enables heterogeneous AI agents to share knowledge and coordinate work.
The Problem
When agents cannot share what they know, every agent starts from zero. Work is duplicated, discoveries are lost, and coordination breaks down. Lithos solves this by providing a persistent, shared knowledge layer that compounds in value over time.
What It Is
Lithos is an MCP server that provides a shared knowledge store for AI agents running on your local infrastructure. Knowledge is stored as human-readable Markdown files (compatible with Obsidian) while providing fast full-text and semantic search for agents.
Who It's For
Lithos is the Knowledge Layer for teams running AI agents in production.
Much as Alation popularized the term "Knowledge Layer" for enterprise data governance, Lithos provides the equivalent for AI agent systems: a structured, searchable, shared memory that compounds in value the more it is used. Each agent interaction enriches the knowledge base, making every subsequent agent smarter and faster.
- Teams running multiple AI agents (Agent Zero, OpenClaw, Claude Code, custom agents)
- Developers who want agents to share discoveries and avoid duplicate work
- Anyone who needs agent knowledge to be inspectable and version-controlled
Key Features
- 📁 Markdown-first: All knowledge stored as Obsidian-compatible `.md` files
- 🔍 Fast search: Tantivy full-text + ChromaDB semantic search
- 🕸️ Knowledge graph: NetworkX-powered relationships via `[[wiki-links]]`
- 🤝 Multi-agent coordination: Task claiming, findings sharing, status tracking
- 🧠 Research cache: One-call freshness check so agents skip redundant research — returns hit/miss/stale with update guidance
- 🔗 URL deduplication: Automatic detection and prevention of duplicate notes from the same source URL
- 🧬 Provenance tracking: Declare which notes a synthesis is derived from and query lineage across the knowledge base
- 🔌 MCP interface: Works with any MCP-compatible agent or tool
- 🏠 Local & private: No cloud dependencies, you own your data
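As a sketch of the wiki-link idea above (illustrative only, not Lithos's actual implementation — a plain dict-of-sets stands in here for the NetworkX graph the feature list mentions):

```python
# Extract [[wiki-link]] targets from note bodies and build a simple link graph.
# Hypothetical sketch; Lithos's real parser and graph layer are not shown here.
import re
from collections import defaultdict

# Capture the link target, stopping at "]", an alias separator "|", or a "#" heading anchor.
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def extract_links(body: str) -> list[str]:
    """Return wiki-link targets found in a Markdown body."""
    return [m.strip() for m in WIKILINK.findall(body)]

def build_graph(notes: dict[str, str]) -> dict[str, set[str]]:
    """Map each note name to the set of note names it links to."""
    graph: dict[str, set[str]] = defaultdict(set)
    for name, body in notes.items():
        graph[name].update(extract_links(body))
    return graph
```

With NetworkX installed, the same adjacency data could be loaded into a `DiGraph` to answer reachability and backlink queries.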
Quickstart
Agent Zero
If Agent Zero runs inside Docker on the same machine as Lithos, use `host.docker.internal` to reach the host:
```json
{
  "mcpServers": {
    "lithos": {
      "url": "http://host.docker.internal:8765/sse"
    }
  }
}
```
OpenClaw
Update `mcporter.json`, typically found at `~/.openclaw/workspace/config/mcporter.json`.
Use `localhost` if Lithos runs on the same machine as OpenClaw; otherwise use the server's hostname or IP address.
```json
{
  "mcpServers": {
    "lithos": {
      "baseUrl": "http://<your hostname>:8765/sse"
    }
  },
  "imports": []
}
```
Claude Code

```shell
claude mcp add --transport sse lithos http://localhost:8765/sse
```
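All three quickstarts assume a Lithos server listening on port 8765. A small helper to sanity-check that before wiring agents up (`lithos_reachable` is an illustrative name, not part of Lithos):

```python
# Confirm something is listening on the Lithos SSE port before configuring agents.
# The port 8765 default matches the quickstart configs above.
import socket

def lithos_reachable(host: str = "localhost", port: int = 8765, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A `False` result usually means the server is not running, the port differs, or (for Agent Zero in Docker) the container cannot resolve the host.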
Documentation
- CLI Reference — installing and using the `lithos` command-line tool
- Specification — full technical specification
- LCMA Design — design notes
Tech Stack
| Component | Technology |
|---|---|
| Storage | Markdown + YAML frontmatter |
| Full-text search | Tantivy |
| Semantic search | ChromaDB + sentence-transformers |
| Knowledge graph | NetworkX |
| Agent interface | MCP (FastMCP) |
| File sync | watchdog |
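The storage row above means each note is a plain Markdown file with a YAML frontmatter header. A minimal stdlib-only sketch of that layout (Lithos itself would use a real YAML parser; the `key: value` handling here is deliberately simplified and illustrative):

```python
# Split a note into its YAML frontmatter header and Markdown body.
# Simplified sketch: handles only flat "key: value" lines, not full YAML.

def split_frontmatter(text: str) -> tuple[dict[str, str], str]:
    """Return (frontmatter dict, Markdown body) for a note file's contents."""
    if not text.startswith("---\n"):
        return {}, text  # no frontmatter block
    header, _, body = text[4:].partition("\n---\n")
    meta: dict[str, str] = {}
    for line in header.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta, body.lstrip("\n")

note = """---
title: Example note
tags: research
---
Body text with a [[wiki-link]].
"""
meta, body = split_frontmatter(note)
```

Because the format is plain text, the same files open cleanly in Obsidian and diff well under version control.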
Development Commands
```shell
# Install dependencies (uses uv; the "dev" dependency group is installed
# by default, so no extra flag is required)
uv sync

# Run unit tests
uv run pytest -m "not integration" tests/ -q

# Run integration tests
uv run pytest -m integration tests/ -q

# Run all tests with coverage
uv run pytest tests/ --cov=lithos --cov-report=xml

# Lint
uv run ruff check .

# Format check
uv run ruff format --check src/ tests/

# Type check
uv run pyright src/

# Auto-fix lint + format
uv run ruff check --fix . && uv run ruff format src/ tests/

# Start server (stdio)
uv run lithos serve

# Start server (SSE)
uv run lithos serve --transport sse --port 8765

# Docker
cd docker && docker compose up -d --build

# Run pointing at a data dir
LITHOS_DATA_PATH="<DATA DIR PATH>" docker compose up -d --build

# Stop
cd docker && docker compose down
```
Docker: running multiple environments
Lithos ships with `docker/run.sh`, a thin wrapper around `docker compose` that drives each environment from its own gitignored `.env.<name>` file and a distinct compose project name (`-p lithos-<name>`). This lets you run prod, staging, and fuzz side-by-side on one host without container-name, port, or volume collisions.
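The README doesn't show the script's internals; as a rough sketch (flag layout assumed, not verified against the actual `run.sh`), the per-environment dispatch likely boils down to:

```shell
# Hypothetical sketch of the dispatch docker/run.sh performs. It echoes the
# docker compose command it would run (dry run) instead of executing it.
run_env() {
  name="$1"
  action="${2:-up}"
  compose="docker compose --env-file .env.${name} -p lithos-${name}"
  case "$action" in
    up)      echo "$compose up -d --build" ;;
    down)    echo "$compose down" ;;
    logs)    echo "$compose logs -f" ;;
    status)  echo "$compose ps" ;;
    restart) echo "$compose down && $compose up -d --build" ;;
    *)       echo "usage: run_env <env> [up|down|logs|status|restart]" >&2; return 1 ;;
  esac
}
```

The `--env-file` and `-p` flags are what keep each environment's ports, container names, and volumes isolated.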
Set up env files
Create one file per environment under docker/:
docker/.env.prod

```
LITHOS_ENVIRONMENT=production
LITHOS_DATA_PATH=/path/to/lithos/data
LITHOS_HOST_PORT=8765
LITHOS_CONTAINER_NAME=lithos
```

docker/.env.staging

```
LITHOS_ENVIRONMENT=staging
LITHOS_DATA_PATH=/path/to/lithos/data-staging
LITHOS_HOST_PORT=8766
LITHOS_CONTAINER_NAME=lithos-staging
```

docker/.env.fuzz

```
LITHOS_ENVIRONMENT=fuzz
LITHOS_DATA_PATH=/path/to/lithos/data-fuzz
LITHOS_HOST_PORT=8767
LITHOS_CONTAINER_NAME=lithos-fuzz
```
`LITHOS_ENVIRONMENT` becomes the OTEL `deployment.environment` resource attribute, so metrics, traces, and logs are labelled per environment in your observability stack.
Use the launcher
```shell
cd docker
./run.sh prod            # build & start production (default action = up)
./run.sh staging up      # same, explicit
./run.sh fuzz logs       # follow container logs
./run.sh staging status  # show running containers for this stack
./run.sh prod down       # stop & remove the stack
./run.sh fuzz restart    # down + up
```
Each environment gets its own container (`lithos`, `lithos-staging`, `lithos-fuzz`), its own host port, and its own data volume, so they can all run concurrently. Running `./run.sh` with no arguments prints usage.
Telemetry & Observability
Lithos emits OpenTelemetry metrics, traces, and logs when telemetry is enabled.
The only supported export path is OTLP/HTTP push to a collector — there is no `/metrics` scrape endpoint on the Lithos process itself (see closed issue #164 for the rationale).
How metrics reach your dashboards
```
Lithos process
  │ OTLP/HTTP (push every export_interval_ms, default 30 s)
  ▼
OTEL Collector  ←  lithos-observability/otel-collector/config.yml
  │ Prometheus exporter on :8889
  ▼
Prometheus      ←  lithos-observability/prometheus/prometheus.yml
  │
  ▼
Grafana
```
Traces fan out to Tempo, logs to Loki, via the same collector.
Configuration
```yaml
telemetry:
  enabled: false          # master switch
  endpoint: null          # OTLP base URL, e.g. http://otel-collector:4318
  console_fallback: false # print spans/metrics to stdout when no endpoint
  service_name: lithos
  environment: null       # becomes OTEL deployment.environment
  export_interval_ms: 30000
```
Environment variables override the endpoint per signal when needed: `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT`, `OTEL_EXPORTER_OTLP_METRICS_ENDPOINT`, and `OTEL_EXPORTER_OTLP_LOGS_ENDPOINT`.
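For example, to route traces and metrics to different collectors (the hostnames here are hypothetical), per-signal overrides might look like:

```shell
# Per-signal OTLP/HTTP overrides; unset signals fall back to telemetry.endpoint.
# Hostnames are placeholders for your own collector services.
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="http://tempo-collector:4318/v1/traces"
export OTEL_EXPORTER_OTLP_METRICS_ENDPOINT="http://metrics-collector:4318/v1/metrics"
```

Note that per-signal OTLP/HTTP endpoint variables take the full path (`/v1/traces`, `/v1/metrics`), unlike the base URL in the config file.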
Local debugging without a collector
Pass `--telemetry-console` to `lithos serve` to route metrics and spans to stdout via console exporters. This is equivalent to setting `telemetry.enabled=true` + `telemetry.console_fallback=true` in config, and is the shortest path to answering "is my instrumentation even firing?" when no collector is running.
```shell
lithos --data-dir ./data serve --telemetry-console
```
Running the full observability stack locally
See lithos-observability/ for a one-command Docker Compose stack (OTEL
Collector + Prometheus + Grafana + Tempo + Loki). Point Lithos at it with:
```shell
LITHOS_TELEMETRY__ENABLED=true \
LITHOS_TELEMETRY__ENDPOINT=http://localhost:4318 \
lithos serve
```
Project details
Download files
Source Distribution: lithos_mcp-0.2.1.tar.gz
Built Distribution: lithos_mcp-0.2.1-py3-none-any.whl
File details
Details for the file lithos_mcp-0.2.1.tar.gz.
File metadata
- Download URL: lithos_mcp-0.2.1.tar.gz
- Upload date:
- Size: 668.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 3b4e64fa3c31f829bfbdfd3e523b8d076296b0e07af751f8981305de098dd353 |
| MD5 | 86169e5b6b00f16362781ad1e9118b49 |
| BLAKE2b-256 | 2563dabdb9a117597cea5552324508969d51b3cdfe03621a98cf7867f25765eb |
Provenance
The following attestation bundles were made for lithos_mcp-0.2.1.tar.gz:

Publisher: pypi.yml on agent-lore/lithos
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: lithos_mcp-0.2.1.tar.gz
- Subject digest: 3b4e64fa3c31f829bfbdfd3e523b8d076296b0e07af751f8981305de098dd353
- Sigstore transparency entry: 1338733680
- Permalink: agent-lore/lithos@cd25097acb6ea4bd3b594884a1b90d50be0fc427
- Branch / Tag: refs/tags/v0.2.1
- Owner: https://github.com/agent-lore
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: pypi.yml@cd25097acb6ea4bd3b594884a1b90d50be0fc427
- Trigger Event: release
File details
Details for the file lithos_mcp-0.2.1-py3-none-any.whl.
File metadata
- Download URL: lithos_mcp-0.2.1-py3-none-any.whl
- Upload date:
- Size: 150.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 823870b742f826cd3584c22fc654814894b8040bc93b394fc7a79996ec707ccc |
| MD5 | fd8b2807145165917eb1ea48983a6f22 |
| BLAKE2b-256 | 9186b098ecff88041874ecfc6f85258d128ab5c04cd2a595b66e6e1efd787a74 |
Provenance
The following attestation bundles were made for lithos_mcp-0.2.1-py3-none-any.whl:

Publisher: pypi.yml on agent-lore/lithos
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: lithos_mcp-0.2.1-py3-none-any.whl
- Subject digest: 823870b742f826cd3584c22fc654814894b8040bc93b394fc7a79996ec707ccc
- Sigstore transparency entry: 1338733743
- Permalink: agent-lore/lithos@cd25097acb6ea4bd3b594884a1b90d50be0fc427
- Branch / Tag: refs/tags/v0.2.1
- Owner: https://github.com/agent-lore
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: pypi.yml@cd25097acb6ea4bd3b594884a1b90d50be0fc427
- Trigger Event: release