
Lore


Give your AI a brain. Universal memory layer for AI agents. MCP-native. Self-hosted. One docker compose up and your AI remembers everything.


Quickstart (< 2 minutes)

# 1. Start Lore
git clone https://github.com/amitpaz1/lore.git && cd lore
docker compose up -d

# 2. Initialize your org + get an API key
curl -s -X POST http://localhost:8765/v1/org/init \
  -H "Content-Type: application/json" -d '{"name": "my-org"}' | python3 -m json.tool

# 3. Add this to your Claude Desktop config (see below)

Add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):

{
  "mcpServers": {
    "lore": {
      "command": "python",
      "args": ["-m", "lore.mcp"],
      "env": {
        "LORE_PROJECT": "my-project"
      }
    }
  }
}

Restart Claude Desktop. Done. Claude can now remember and recall information across conversations.


What Is Lore?

Lore gives AI agents persistent memory. Your AI learns something? It remembers it forever. Next conversation, next agent, next week — the knowledge is there.

5 MCP tools your AI gets:

Tool      What it does        Example
remember  Store a memory      "Remember that Stripe rate-limits at 100 req/min"
recall    Semantic search     "What do we know about rate limiting?"
forget    Delete memories     "Forget the outdated deployment notes"
list      Browse memories     "Show me all lessons tagged 'postgres'"
stats     Memory statistics   "How many memories do we have?"

Architecture

┌─────────────────────────────────────────────────────┐
│              AI Clients                              │
│  Claude Desktop · Cursor · Windsurf · Custom Agents  │
└──────────────────────┬──────────────────────────────┘
                       │ MCP (stdio)
┌──────────────────────▼──────────────────────────────┐
│              Lore MCP Server                         │
│  ┌────────┐ ┌──────┐ ┌──────┐ ┌────┐ ┌─────┐      │
│  │remember│ │recall│ │forget│ │list│ │stats│      │
│  └───┬────┘ └──┬───┘ └──┬───┘ └─┬──┘ └──┬──┘      │
│      └─────────┴────────┴───────┴───────┘           │
│                     │                                │
│         ┌───────────┼───────────┐                    │
│         ▼           ▼           ▼                    │
│   ┌──────────┐ ┌─────────┐ ┌────────┐              │
│   │ Embedder │ │ Storage │ │Redactor│              │
│   │(MiniLM)  │ │(SQLite/ │ │(opt-in)│              │
│   │ 384-dim  │ │ Postgres│ │        │              │
│   └──────────┘ └─────────┘ └────────┘              │
└──────────────────────────────────────────────────────┘
                       │
        ┌──────────────┼──────────────┐
        ▼                             ▼
 ┌──────────────┐            ┌──────────────────┐
 │ Local Mode   │            │ Server Mode      │
 │ SQLite       │            │ PostgreSQL +     │
 │ Zero config  │            │ pgvector         │
 │ Single user  │            │ Multi-tenant     │
 └──────────────┘            │ REST API         │
                             └──────────────────┘

Two modes:

  • Local mode (default): SQLite + embedded ONNX model. Zero config. Perfect for single-user Claude Desktop.
  • Server mode: PostgreSQL + pgvector. Multi-tenant, API keys, shared across teams. Use with docker compose up.

MCP Setup

Claude Desktop

{
  "mcpServers": {
    "lore": {
      "command": "python",
      "args": ["-m", "lore.mcp"],
      "env": {
        "LORE_PROJECT": "my-project"
      }
    }
  }
}

Cursor

Add to .cursor/mcp.json in your project root:

{
  "mcpServers": {
    "lore": {
      "command": "python",
      "args": ["-m", "lore.mcp"],
      "env": {
        "LORE_PROJECT": "my-project"
      }
    }
  }
}

Remote Mode (shared server)

Point the MCP client at your Lore server instead of using local SQLite:

{
  "mcpServers": {
    "lore": {
      "command": "python",
      "args": ["-m", "lore.mcp"],
      "env": {
        "LORE_STORE": "remote",
        "LORE_API_URL": "http://localhost:8765",
        "LORE_API_KEY": "lore_sk_..."
      }
    }
  }
}

See examples/ for ready-to-paste config files.


Install

pip install lore-sdk

With MCP support:

pip install lore-sdk[mcp]

With server dependencies:

pip install lore-sdk[server]

REST API Reference

All endpoints require an Authorization: Bearer lore_sk_... header.

Memories

Method  Endpoint                    Description
POST    /v1/memories                Create a memory (server embeds automatically)
GET     /v1/memories                List memories (paginated, filterable)
GET     /v1/memories/search?q=...   Semantic search
GET     /v1/memories/{id}           Get a single memory
DELETE  /v1/memories/{id}           Delete a memory
DELETE  /v1/memories?confirm=true   Bulk delete with filters
GET     /v1/stats                   Memory store statistics
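As a sketch, the requests above can be assembled with only the Python standard library. The helper names here (create_memory_request, search_request) are illustrative, not part of the Lore SDK; each returns a (method, url, headers, body) tuple for use with any HTTP client.

```python
import json
import urllib.parse

BASE_URL = "http://localhost:8765"  # placeholder server address

def _headers(api_key):
    # Every endpoint expects the bearer key in an Authorization header
    return {"Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json"}

def create_memory_request(api_key, content, type="note", tags=None, project=None):
    # POST /v1/memories -- the server embeds the content automatically
    body = {"content": content, "type": type}
    if tags:
        body["tags"] = tags
    if project:
        body["project"] = project
    return ("POST", f"{BASE_URL}/v1/memories", _headers(api_key), json.dumps(body))

def search_request(api_key, q, limit=5):
    # GET /v1/memories/search?q=...&limit=...
    qs = urllib.parse.urlencode({"q": q, "limit": limit})
    return ("GET", f"{BASE_URL}/v1/memories/search?{qs}", _headers(api_key), None)
```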

Create a memory

curl -X POST http://localhost:8765/v1/memories \
  -H "Authorization: Bearer lore_sk_..." \
  -H "Content-Type: application/json" \
  -d '{
    "content": "Stripe rate-limits at 100 req/min. Use exponential backoff.",
    "type": "lesson",
    "tags": ["stripe", "rate-limit"],
    "project": "payments"
  }'

Search memories

curl "http://localhost:8765/v1/memories/search?q=rate+limiting&limit=5" \
  -H "Authorization: Bearer lore_sk_..."

Organization setup

# First-run: create org and get API key
curl -X POST http://localhost:8765/v1/org/init \
  -H "Content-Type: application/json" \
  -d '{"name": "my-org"}'
# Returns: {"org_id": "...", "api_key": "lore_sk_...", "key_prefix": "lore_sk_..."}

Self-Hosted Deployment

Docker Compose (recommended)

git clone https://github.com/amitpaz1/lore.git && cd lore

# Development
docker compose up -d

# Production (with secure password)
echo "POSTGRES_PASSWORD=$(openssl rand -hex 16)" > .env
docker compose -f docker-compose.prod.yml up -d

The stack includes:

  • Lore server on port 8765
  • PostgreSQL 16 + pgvector for storage and vector search
  • Health checks, auto-restart, resource limits (production)

Environment Variables

Variable        Default             Description
DATABASE_URL    (unset)             PostgreSQL connection string (server mode)
LORE_STORE      local               local (SQLite) or remote (HTTP to server)
LORE_PROJECT    (unset)             Default project scope
LORE_API_URL    (unset)             Server URL (remote mode)
LORE_API_KEY    (unset)             API key (remote mode)
LORE_DB_PATH    ~/.lore/default.db  SQLite path (local mode)
LORE_MODEL_DIR  ~/.lore/models      Embedding model cache
LORE_REDACT     false               Enable PII redaction
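A sketch of how a client might resolve these variables; resolve_config is a hypothetical helper, with defaults mirroring the table above.

```python
import os

def resolve_config(env=None):
    # Illustrative only: fall back to the documented defaults when a
    # variable is unset; unset variables with no default become None.
    env = os.environ if env is None else env
    return {
        "store": env.get("LORE_STORE", "local"),
        "project": env.get("LORE_PROJECT"),
        "api_url": env.get("LORE_API_URL"),
        "api_key": env.get("LORE_API_KEY"),
        "db_path": env.get("LORE_DB_PATH", os.path.expanduser("~/.lore/default.db")),
        "model_dir": env.get("LORE_MODEL_DIR", os.path.expanduser("~/.lore/models")),
        "redact": env.get("LORE_REDACT", "false").lower() == "true",
    }
```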

Why Lore?

                Lore             Mem0            Zep             DIY pgvector
MCP native      Yes              No              No              No
Self-hosted     Yes              Paid cloud      Paid cloud      Yes
Setup time      2 min            Account signup  Account signup  Hours
Local mode      Yes (SQLite)     No              No              No
Embedding       Built-in (ONNX)  API-dependent   Built-in        DIY
Multi-tenant    Yes              Yes             Yes             DIY
Cost            Free             $99+/mo         $99+/mo         Free + time
Vendor lock-in  None             High            High            None

Lore is the only memory layer that's:

  1. MCP-native — works directly with Claude Desktop, Cursor, Windsurf
  2. Zero-config local mode — pip install lore-sdk and go, no server needed
  3. Self-hosted — your data stays on your machine or your infra
  4. Open source — MIT licensed, no usage limits, no telemetry

How It Works

Lore uses semantic search powered by a local ONNX embedding model (all-MiniLM-L6-v2, 384 dimensions). No API calls, no data leaves your machine.

Storing a memory:

  1. Content comes in via an MCP tool or the REST API
  2. Optional PII redaction scrubs the text (runs before embedding)
  3. Text is embedded into a 384-dim vector (local ONNX, ~200ms)
  4. Memory + embedding are stored in SQLite (local) or PostgreSQL (server)
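Under toy assumptions, the storing steps can be sketched as one pipeline. Here redact, embed, and remember are illustrative stand-ins: the fake embedding is deterministic hash noise, not the real MiniLM model.

```python
import hashlib
import math
import re
import time

def redact(text):
    # Toy stand-in for the opt-in redactor: scrub strings shaped like API keys
    return re.sub(r"sk_[A-Za-z0-9]+", "[REDACTED]", text)

def embed(text, dim=384):
    # Deterministic toy embedding, NOT MiniLM; real Lore runs an ONNX model
    h = hashlib.sha256(text.encode()).digest()
    vec = [(h[i % len(h)] - 128) / 128 for i in range(dim)]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]  # unit-normalized, 384 components

store = []

def remember(content, redact_pii=False):
    # Redaction (when enabled) happens before embedding, so scrubbed
    # values never reach the vector or the stored record
    text = redact(content) if redact_pii else content
    record = {"content": text, "embedding": embed(text), "ts": time.time()}
    store.append(record)
    return record
```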

Recalling memories:

  1. Query text is embedded using the same model
  2. Cosine similarity search against stored embeddings
  3. Results ranked by: similarity × time_decay (newer memories score higher)
  4. Filtered by type, tags, project as requested
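A minimal sketch of that ranking, assuming an exponential half-life time decay (the exact decay function Lore uses is not documented here):

```python
import math
import time

def cosine(a, b):
    # Plain cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def time_decay(age_seconds, half_life_days=30.0):
    # Assumed decay shape: a memory's weight halves every half_life_days
    return 0.5 ** (age_seconds / (half_life_days * 86400))

def rank(query_vec, memories, now=None, limit=5):
    # Score = similarity x time_decay, so newer memories outrank equally
    # similar older ones; memories are dicts with "embedding" and "ts"
    now = time.time() if now is None else now
    scored = [(cosine(query_vec, m["embedding"]) * time_decay(now - m["ts"]), m)
              for m in memories]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [m for _, m in scored[:limit]]
```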

Python SDK

from lore import Lore

client = Lore()  # local mode — zero config

# Store
client.remember(
    content="Stripe rate-limits at 100 req/min. Use exponential backoff.",
    type="lesson",
    tags=["stripe", "rate-limit"],
)

# Search
results = client.recall("stripe rate limiting", limit=5)

# List
memories = client.list(type="lesson", project="payments")

# Stats
stats = client.stats()

Remote mode

from lore import Lore

client = Lore(
    store="remote",
    api_url="http://localhost:8765",
    api_key="lore_sk_...",
)

Features

  • Semantic search — find memories by meaning, not keywords
  • Local-first — SQLite + ONNX embeddings, no server needed
  • Multi-tenant server — PostgreSQL + pgvector, API key auth
  • MCP native — 5 tools for Claude Desktop, Cursor, Windsurf
  • Memory types — note, lesson, snippet, fact, conversation, decision
  • Project scoping — isolate memories by project
  • Tag filtering — organize with tags, filter on recall
  • Time decay — newer memories rank higher in search
  • PII redaction — opt-in scrubbing of API keys, emails, IPs, etc.
  • REST API — full CRUD + search, OpenAPI docs at /docs

Guides

Guide                   Description
Claude Desktop Setup    Step-by-step Claude Desktop integration
Cursor Setup            Cursor IDE integration
Windsurf Setup          Windsurf (Codeium) integration
Self-Hosted Deployment  Run on your own infrastructure
Docker Guide            Docker deployment details
CLI Usage               Command-line interface guide
Python SDK              Python SDK with code examples
TypeScript SDK          TypeScript SDK guide
Publishing              How to publish to PyPI and npm

Contributing

Contributions welcome! See CONTRIBUTING.md for guidelines.

# Development setup
git clone https://github.com/amitpaz1/lore.git && cd lore
pip install -e ".[dev,server,mcp,cli]"
pytest

License

MIT
