Lore — Cross-Agent Memory for AI
Persistent semantic memory that works with every MCP-compatible AI tool.
Agents store what they learn — other agents recall it. Knowledge graphs, fact extraction, auto-consolidation, and more. No API key required for basic use.
Why Lore?
- Local-first — SQLite by default, no server needed. Scale to Postgres + pgvector when ready.
- No API key required — local ONNX embeddings ship with the package. LLM features are opt-in.
- Single database — no Neo4j, Redis, or Qdrant dependency. Everything in one SQLite file or Postgres DB.
- 20 MCP tools — remember, recall, knowledge graph, fact extraction, consolidation, classification, and more.
- Opt-in intelligence — enrichment, classification, fact extraction, and knowledge graphs activate only when you configure an LLM.
Comparison
| Feature | Lore | Mem0 | Zep | Cognee |
|---|---|---|---|---|
| Local-first (no server) | Yes | No | No | No |
| MCP native | Yes | No | No | No |
| Knowledge graph | Yes | Yes* | Yes | Yes |
| Fact extraction | Yes | No | No | Yes |
| Auto-consolidation | Yes | No | Yes | No |
| Conflict resolution | Yes | No | No | No |
| Memory tiers | Yes | No | Yes | No |
| Dialog classification | Yes | No | No | No |
| Webhook ingestion | Yes | No | No | No |
| No external DB required | Yes | No** | No | No |
| PII masking | Yes | No | No | No |
* Mem0 requires Neo4j for graph features. ** Mem0 requires Qdrant or Redis.
Comparison as of March 2026. Lore focuses on being the MCP-native, local-first choice for agent memory.
Quick Start
1. Install (30 seconds)
pip install lore-sdk
2. Configure your AI tool (60 seconds)
Add to your MCP client config (e.g., Claude Desktop claude_desktop_config.json):
{
  "mcpServers": {
    "lore": {
      "command": "uvx",
      "args": ["lore-memory"],
      "env": {
        "LORE_PROJECT": "my-project"
      }
    }
  }
}
See Setup Guides for Claude Desktop, Cursor, VS Code, Windsurf, ChatGPT, Cline, and Claude Code.
3. Try it (3 minutes)
Ask your AI assistant:
"Remember that our API uses REST with JSON responses and rate limits at 100 req/min"
Then ask:
"What do you know about our API?"
Lore's recall tool will be invoked automatically.
4. Enable LLM features (optional)
export LORE_ENRICHMENT_ENABLED=true
export LORE_LLM_PROVIDER=anthropic
export LORE_LLM_API_KEY=sk-ant-...
This enables auto-enrichment (topics, entities, sentiment), classification (intent, domain, emotion), and fact extraction on every remember() call.
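With enrichment enabled, recalled memories carry the extracted metadata alongside their content. A minimal sketch of inspecting it after recall, assuming the enrichment fields are exposed on the memory object (the attribute names below are illustrative, not confirmed API):

from lore import Lore

lore = Lore()
lore.remember("Customer churn spiked after the March pricing change")

# With LORE_ENRICHMENT_ENABLED=true, remember() also runs the LLM pipeline.
for r in lore.recall("pricing change impact"):
    mem = r.memory
    print(mem.content)
    # Attribute names are assumptions based on the feature list (topics/entities/sentiment).
    print(getattr(mem, "topics", None), getattr(mem, "entities", None), getattr(mem, "sentiment", None))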
5. Use the SDK directly
from lore import Lore

lore = Lore()  # zero config — local SQLite, built-in embeddings

lore.remember(
    "Stripe API returns 429 after 100 req/min — use exponential backoff",
    tags=["stripe", "rate-limit"],
    tier="long",
)

results = lore.recall("stripe rate limiting")
for r in results:
    print(f"[{r.score:.2f}] {r.memory.content}")
Architecture
graph LR
    A[MCP Client] -->|stdio| B[MCP Server]
    B --> C[Lore SDK]
    C --> D[Store<br/>SQLite / Postgres / HTTP]
    C --> E[Embedder<br/>ONNX local]
    C --> F[LLM Pipeline<br/>optional]
    F --> F1[Enrich]
    F --> F2[Classify]
    F --> F3[Extract Facts]
    F --> F4[Knowledge Graph]
    F --> F5[Consolidate]
Pipeline: remember() → redact PII → embed → store → enrich → classify → extract facts → update graph
Recall: recall() → embed query → vector search → tier weighting → importance scoring → graph boost → return results
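The ranking step can be pictured as the raw vector similarity adjusted by tier weighting, importance, and a graph boost. A simplified, illustrative sketch of that combination (the weights and formula are made up for clarity, not Lore's actual values):

# Conceptual sketch of the recall scoring described above; numbers are illustrative.
TIER_WEIGHTS = {"working": 1.0, "short": 0.9, "long": 0.8}

def rank(candidates):
    """candidates: dicts with 'similarity', 'tier', 'importance', 'graph_connected'."""
    scored = []
    for c in candidates:
        score = c["similarity"] * TIER_WEIGHTS[c["tier"]]   # tier weighting
        score *= 1.0 + 0.2 * c["importance"]                # importance scoring
        if c["graph_connected"]:
            score *= 1.1                                    # graph boost
        scored.append((score, c))
    return [c for _, c in sorted(scored, key=lambda x: x[0], reverse=True)]

print(rank([
    {"similarity": 0.82, "tier": "long", "importance": 0.5, "graph_connected": True},
    {"similarity": 0.88, "tier": "working", "importance": 0.1, "graph_connected": False},
]))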
Features
v0.7.0 — Living Archive
on_this_day · verbatim recall · temporal filters
On This Day surfaces memories from the same calendar day across years. Verbatim Recall returns original words instead of AI summaries. Temporal Filters add date-range filtering to recall (year, month, days_ago, before/after, window presets).
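A sketch of temporal filtering from the SDK, assuming the filter names above map directly onto recall() keyword arguments (the exact signatures are assumptions, so check the API reference):

from lore import Lore

lore = Lore()

# Roughly the last month only (parameter name assumed from the filter list above).
recent = lore.recall("deployment incidents", days_ago=30)

# Explicit date range (before/after assumed from the filters named above).
q1 = lore.recall("roadmap decisions", after="2026-01-01", before="2026-04-01")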
Memory Management
remember · recall · forget · list_memories · stats · upvote · downvote
Core memory operations with semantic search, tier-based TTL (working/short/long), importance scoring, and automatic PII redaction.
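For example, the tier chosen at write time controls how long a memory is retained. A quick sketch using the documented remember()/recall() calls; the other operations mirror the MCP tools, and their SDK method names below are assumptions:

from lore import Lore

lore = Lore()

# Tier controls retention (working/short/long per the TTL model above).
lore.remember("Scratch note: retry the flaky integration test tomorrow", tier="working")
lore.remember("Team prefers squash merges on feature branches", tier="long")

# Assumed SDK counterparts of the forget / stats MCP tools (names not confirmed):
# lore.forget(memory_id)
# lore.stats()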
Knowledge Graph
graph_query · entity_map · related
Entities and relationships extracted from memories, with hop-by-hop traversal. Graph-enhanced recall surfaces connected memories that pure vector search misses.
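A sketch of how this might look from the SDK, assuming methods that mirror the graph_query and related MCP tools (those method names are assumptions; recall() itself is documented):

from lore import Lore

lore = Lore()
lore.remember("Alice owns the billing service, which calls the Stripe API")

# Assumed SDK counterparts of the graph MCP tools (names/signatures not confirmed):
# subgraph = lore.graph_query("billing service", hops=2)
# neighbours = lore.related("Stripe API")

# Graph-enhanced recall happens inside recall(); connected memories get boosted.
results = lore.recall("who owns billing")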
Fact Extraction & Conflicts
extract_facts · list_facts · conflicts
Atomic (subject, predicate, object) triples extracted from text. Automatic conflict detection when new facts contradict old ones — supersede, merge, or flag.
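Conceptually, a fact is a triple, and a conflict is a new triple that shares subject and predicate with an old one but disagrees on the object. A toy illustration of that rule (not Lore's internal representation):

from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    subject: str
    predicate: str
    object: str

def find_conflicts(new, existing):
    """Old facts the new fact contradicts: same subject and predicate, different object."""
    return [f for f in existing
            if f.subject == new.subject
            and f.predicate == new.predicate
            and f.object != new.object]

known = [Fact("API rate limit", "is", "100 req/min")]
print(find_conflicts(Fact("API rate limit", "is", "200 req/min"), known))  # -> supersede, merge, or flag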
Intelligence Pipeline
classify · enrich · consolidate
LLM-powered classification (intent/domain/emotion), metadata enrichment (topics/entities/sentiment), and memory consolidation (merge duplicates, summarize clusters).
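These also have MCP-tool names listed above; a hedged sketch of what SDK-level calls might look like, with every method name here an assumption and an LLM provider configured as in step 4:

from lore import Lore

lore = Lore()  # LLM features require LORE_ENRICHMENT_ENABLED and a provider (see step 4)

# Assumed SDK counterparts of the classify / enrich / consolidate MCP tools
# (method names and signatures are not confirmed):
# labels = lore.classify("I'm frustrated that the deploy keeps failing")  # intent/domain/emotion
# meta = lore.enrich(memory_id)                                           # topics/entities/sentiment
# lore.consolidate()                                                      # merge duplicates, summarize clusters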
Import/Export
ingest · as_prompt · check_freshness · github_sync
Webhook-style ingestion with source tracking. Export memories formatted for LLM context injection. Git-based staleness detection. GitHub issue/PR sync.
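For instance, exporting memories as LLM-ready context corresponds to the as_prompt tool. A sketch assuming an SDK method of the same name (the method name and parameters are assumptions):

from lore import Lore

lore = Lore()
lore.remember("Our API uses REST with JSON responses and rate limits at 100 req/min")

# Assumed SDK counterpart of the as_prompt MCP tool: a text block ready for
# context injection (parameter names are illustrative, not confirmed):
# context = lore.as_prompt(query="API conventions", limit=20)
# print(context)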
Setup Guides
| Client | Guide |
|---|---|
| Claude Desktop | docs/setup-claude-desktop.md |
| Cursor | docs/setup-cursor.md |
| VS Code (Copilot) | docs/setup-vscode.md |
| Windsurf | docs/setup-windsurf.md |
| ChatGPT | docs/setup-chatgpt.md |
| Cline | docs/setup-cline.md |
| Claude Code | docs/setup-claude-code.md |
Docker
For team setups with Postgres + pgvector:
docker compose up -d
This starts Postgres with pgvector and the Lore HTTP server. Point your MCP client to http://localhost:8765.
# Production (with secure password)
cp .env.example .env # edit POSTGRES_PASSWORD
docker compose -f docker-compose.prod.yml up -d
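To point the SDK (rather than an MCP client) at the shared server, the HTTP store backend shown in the architecture diagram is the relevant piece; the configuration knob below is an assumption, so check the store configuration docs for the real setting:

import os

# Assumption: the store backend is selected via an environment variable;
# LORE_STORE_URL is illustrative, not a documented name.
os.environ["LORE_STORE_URL"] = "http://localhost:8765"

from lore import Lore

lore = Lore()
lore.remember("Shared team memory now lives on the Postgres-backed server")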
API Reference
Performance
| Operation | Target |
|---|---|
| remember() (no LLM) | < 100ms |
| recall() vector search (100 memories) | < 50ms |
| recall() vector search (10K memories) | < 200ms |
| recall() graph-enhanced (2-hop) | < 500ms |
| Embedding generation (500 words) | < 200ms |
| as_prompt() (100 memories) | < 100ms |
Migration from v0.5.x
v0.6.0 adds 13 new MCP tools (7 → 20), new database columns and tables, and opt-in LLM features. Existing installations work without changes — all new features are opt-in.
Examples
See examples/ for runnable scripts:
- full_pipeline.py — remember, recall, tiers, prompt export
- mcp_tool_tour.py — tour of all 20 MCP tool equivalents
- webhook_ingestion.py — ingest with source tracking
- consolidation_demo.py — memory consolidation
Contributing
git clone https://github.com/agentkitai/lore.git
cd lore
pip install -e ".[dev,mcp,enrichment]"
pytest
License
MIT