
Lore — Cross-Agent Memory for AI

PyPI Python 3.10+ License: MIT MCP Compatible Tests

Persistent semantic memory that works with every MCP-compatible AI tool.

Agents store what they learn — other agents recall it. Knowledge graphs, fact extraction, auto-consolidation, and more. No API key required for basic use.

Why Lore?

  • Local-first — SQLite by default, no server needed. Scale to Postgres + pgvector when ready.
  • No API key required — local ONNX embeddings ship with the package. LLM features are opt-in.
  • Single database — no Neo4j, Redis, or Qdrant dependency. Everything in one SQLite file or Postgres DB.
  • 20 MCP tools — remember, recall, knowledge graph, fact extraction, consolidation, classification, and more.
  • Opt-in intelligence — enrichment, classification, fact extraction, and knowledge graphs activate only when you configure an LLM.

Comparison

| Feature | Lore | Mem0 | Zep | Cognee |
|---|---|---|---|---|
| Local-first (no server) | Yes | No | No | No |
| MCP native | Yes | No | No | No |
| Knowledge graph | Yes | Yes* | Yes | Yes |
| Fact extraction | Yes | No | No | Yes |
| Auto-consolidation | Yes | No | Yes | No |
| Conflict resolution | Yes | No | No | No |
| Memory tiers | Yes | No | Yes | No |
| Dialog classification | Yes | No | No | No |
| Webhook ingestion | Yes | No | No | No |
| No external DB required | Yes | No** | No | No |
| PII masking | Yes | No | No | No |

* Mem0 requires Neo4j for graph features. ** Mem0 requires Qdrant or Redis.

Comparison as of March 2026. Lore focuses on being the MCP-native, local-first choice for agent memory.

Quick Start

1. Install (30 seconds)

pip install lore-sdk

2. Configure your AI tool (60 seconds)

Add to your MCP client config (e.g., Claude Desktop claude_desktop_config.json):

{
  "mcpServers": {
    "lore": {
      "command": "uvx",
      "args": ["lore-memory"],
      "env": {
        "LORE_PROJECT": "my-project"
      }
    }
  }
}

See Setup Guides for Claude Desktop, Cursor, VS Code, Windsurf, ChatGPT, Cline, and Claude Code.

3. Try it (3 minutes)

Ask your AI assistant:

"Remember that our API uses REST with JSON responses and rate limits at 100 req/min"

Then ask:

"What do you know about our API?"

Lore's recall tool will be invoked automatically.

4. Enable LLM features (optional)

export LORE_ENRICHMENT_ENABLED=true
export LORE_LLM_PROVIDER=anthropic
export LORE_LLM_API_KEY=sk-ant-...

This enables auto-enrichment (topics, entities, sentiment), classification (intent, domain, emotion), and fact extraction on every remember() call.

5. Use the SDK directly

from lore import Lore

lore = Lore()  # zero config — local SQLite, built-in embeddings

lore.remember(
    "Stripe API returns 429 after 100 req/min — use exponential backoff",
    tags=["stripe", "rate-limit"],
    tier="long",
)

results = lore.recall("stripe rate limiting")
for r in results:
    print(f"[{r.score:.2f}] {r.memory.content}")

Full Quick Start Guide

Architecture

graph LR
    A[MCP Client] -->|stdio| B[MCP Server]
    B --> C[Lore SDK]
    C --> D[Store<br/>SQLite / Postgres / HTTP]
    C --> E[Embedder<br/>ONNX local]
    C --> F[LLM Pipeline<br/>optional]
    F --> F1[Enrich]
    F --> F2[Classify]
    F --> F3[Extract Facts]
    F --> F4[Knowledge Graph]
    F --> F5[Consolidate]

Pipeline: remember() → redact PII → embed → store → enrich → classify → extract facts → update graph

Recall: recall() → embed query → vector search → tier weighting → importance scoring → graph boost → return results
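
The recall pipeline above blends several signals into one ranking. As a rough illustration, here is a self-contained sketch of how vector similarity, tier weighting, and importance scoring could combine; the weight values and the formula are hypothetical, not Lore's actual internals:

```python
# Illustrative recall scoring: blend raw vector similarity with a
# per-tier weight and a stored importance score. Values are made up.
TIER_WEIGHTS = {"working": 0.8, "short": 1.0, "long": 1.2}

def score(similarity: float, tier: str, importance: float) -> float:
    """Boost similar memories that are long-tier and high-importance."""
    return similarity * TIER_WEIGHTS[tier] * (0.5 + 0.5 * importance)

candidates = [
    {"content": "Stripe returns 429 over 100 req/min", "sim": 0.82,
     "tier": "long", "importance": 0.9},
    {"content": "Lunch order: burrito", "sim": 0.84,
     "tier": "working", "importance": 0.1},
]

# The slightly less similar but durable, important memory wins.
ranked = sorted(candidates,
                key=lambda m: score(m["sim"], m["tier"], m["importance"]),
                reverse=True)
```

The point of the multi-signal blend is that a throwaway working-tier note can out-score a durable fact on cosine similarity alone; tier and importance weighting correct for that.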

Full Architecture Documentation

Features

v0.7.0 — Living Archive

on_this_day · verbatim recall · temporal filters

On This Day surfaces memories from the same calendar day across years. Verbatim Recall returns original words instead of AI summaries. Temporal Filters add date-range filtering to recall (year, month, days_ago, before/after, window presets).
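
The On This Day logic boils down to matching month and day while requiring an earlier year. A minimal sketch of that filter (not Lore's implementation, just the calendar logic):

```python
from datetime import date

def on_this_day(memories: list[tuple[date, str]], today: date) -> list[str]:
    """Return memories created on the same calendar day in earlier years."""
    return [text for d, text in memories
            if (d.month, d.day) == (today.month, today.day)
            and d.year < today.year]

log = [
    (date(2024, 3, 14), "Shipped v0.3.0"),
    (date(2025, 3, 14), "Migrated to pgvector"),
    (date(2025, 6, 1), "Added webhook ingestion"),
]
print(on_this_day(log, date(2026, 3, 14)))
# ['Shipped v0.3.0', 'Migrated to pgvector']
```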

Memory Management

remember · recall · forget · list_memories · stats · upvote · downvote

Core memory operations with semantic search, tier-based TTL (working/short/long), importance scoring, and automatic PII redaction.
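
Tier-based TTL means each tier expires on a different clock. The TTL values below are assumptions for illustration; Lore's actual defaults may differ:

```python
from datetime import datetime, timedelta

# Hypothetical per-tier TTLs; consult the docs for Lore's real defaults.
TIER_TTL = {
    "working": timedelta(hours=1),
    "short": timedelta(days=7),
    "long": None,  # long-tier memories never expire
}

def is_expired(created_at: datetime, tier: str, now: datetime) -> bool:
    """A memory expires once its tier's TTL has elapsed, if it has one."""
    ttl = TIER_TTL[tier]
    return ttl is not None and now - created_at > ttl
```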

Knowledge Graph

graph_query · entity_map · related

Entities and relationships extracted from memories, with hop-by-hop traversal. Graph-enhanced recall surfaces connected memories that pure vector search misses.
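
Hop-by-hop traversal is a bounded breadth-first walk over the entity graph. A self-contained sketch of the idea on a toy adjacency list (the graph data and function name are illustrative, not Lore's API):

```python
from collections import deque

# Toy entity graph: entity -> related entities.
graph = {
    "Stripe": ["rate limiting", "payments"],
    "rate limiting": ["exponential backoff"],
    "payments": ["invoices"],
    "exponential backoff": [],
    "invoices": [],
}

def related(start: str, max_hops: int) -> set[str]:
    """Entities reachable within max_hops, excluding the start node."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return seen - {start}
```

A 2-hop walk from "Stripe" reaches "exponential backoff" even though no memory mentions both terms together, which is exactly what pure vector search misses.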

Fact Extraction & Conflicts

extract_facts · list_facts · conflicts

Atomic (subject, predicate, object) triples extracted from text. Automatic conflict detection when new facts contradict old ones — supersede, merge, or flag.
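
The simplest form of that conflict check: two triples clash when they share a subject and predicate but assert different objects. A hedged sketch of the data shape and the detection rule (not Lore's internal representation):

```python
from typing import NamedTuple

class Fact(NamedTuple):
    subject: str
    predicate: str
    object: str

def find_conflicts(existing: list[Fact], new: Fact) -> list[Fact]:
    """Stored facts that share subject+predicate with `new`
    but assert a different object."""
    return [f for f in existing
            if (f.subject, f.predicate) == (new.subject, new.predicate)
            and f.object != new.object]

store = [Fact("API", "rate_limit", "100 req/min")]
incoming = Fact("API", "rate_limit", "200 req/min")  # contradicts the store
```

Once a conflict is detected, the pipeline can supersede the old fact, merge the two, or flag the pair for review, as described above.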

Intelligence Pipeline

classify · enrich · consolidate

LLM-powered classification (intent/domain/emotion), metadata enrichment (topics/entities/sentiment), and memory consolidation (merge duplicates, summarize clusters).

Import/Export

ingest · as_prompt · check_freshness · github_sync

Webhook-style ingestion with source tracking. Export memories formatted for LLM context injection. Git-based staleness detection. GitHub issue/PR sync.
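
Context injection of the as_prompt sort amounts to rendering memories as a compact text block an LLM can consume. The layout below is an illustrative guess, not Lore's actual output format:

```python
def as_prompt(memories: list[dict]) -> str:
    """Render memories as a context block, most important first.
    Layout is illustrative; Lore's as_prompt output may differ."""
    lines = ["# Relevant memories"]
    for m in sorted(memories, key=lambda m: m["importance"], reverse=True):
        lines.append(f"- [{m['tier']}] {m['content']}")
    return "\n".join(lines)

memories = [
    {"tier": "working", "content": "Deploy scheduled for Friday", "importance": 0.4},
    {"tier": "long", "content": "API rate limit is 100 req/min", "importance": 0.9},
]
print(as_prompt(memories))
```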

Setup Guides

| Client | Guide |
|---|---|
| Claude Desktop | docs/setup-claude-desktop.md |
| Cursor | docs/setup-cursor.md |
| VS Code (Copilot) | docs/setup-vscode.md |
| Windsurf | docs/setup-windsurf.md |
| ChatGPT | docs/setup-chatgpt.md |
| Cline | docs/setup-cline.md |
| Claude Code | docs/setup-claude-code.md |

All Setup Guides

Docker

For team setups with Postgres + pgvector:

docker compose up -d

This starts Postgres with pgvector and the Lore HTTP server. Point your MCP client to http://localhost:8765.
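
An HTTP-based client entry might then look something like the sketch below, mirroring the stdio config from the Quick Start. The exact transport keys depend on your MCP client; `url` is an assumption here, so check your client's documentation:

```json
{
  "mcpServers": {
    "lore": {
      "url": "http://localhost:8765"
    }
  }
}
```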

# Production (with secure password)
cp .env.example .env  # edit POSTGRES_PASSWORD
docker compose -f docker-compose.prod.yml up -d

Self-Hosted Guide

API Reference

Full API Reference

Performance

| Operation | Target |
|---|---|
| remember() (no LLM) | < 100ms |
| recall() vector search (100 memories) | < 50ms |
| recall() vector search (10K memories) | < 200ms |
| recall() graph-enhanced (2-hop) | < 500ms |
| Embedding generation (500 words) | < 200ms |
| as_prompt() (100 memories) | < 100ms |

Benchmark Results

Migration from v0.5.x

v0.6.0 adds 13 new MCP tools (7 → 20), new database columns and tables, and opt-in LLM features. Existing installations work without changes — all new features are opt-in.

Migration Guide

Examples

See examples/ for runnable scripts.

Contributing

git clone https://github.com/agentkitai/lore.git
cd lore
pip install -e ".[dev,mcp,enrichment]"
pytest

License

MIT
