
Persistent AI memory for LLMs and AI agents. Local-first. Learns from every interaction.

Project description


LoreMem — Persistent Memory for AI Agents




1. Install

pip install loremem-ai

That's it. Python 3.9+. Includes sentence-transformers for semantic search.


2. Use

from lore_memory import Memory

m = Memory()
m.store("I live in Amsterdam and work at Google")
m.store("I love Python and hate Java")

m.query("where do I work?")  #> Google (conf=0.867)

m.store("I moved to Berlin")
m.query("where do I live?")  #> Berlin — Amsterdam auto-superseded
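The auto-supersede behavior above can be pictured with a minimal sketch (illustrative only — the `supersede` helper and its last-write-wins rule are hypothetical, not LoreMem's internals):

```python
# Sketch of contradiction resolution: a new fact with the same
# (subject, predicate) replaces the old object.
def supersede(facts, new_fact):
    subject, predicate, _ = new_fact
    kept = [f for f in facts if (f[0], f[1]) != (subject, predicate)]
    kept.append(new_fact)
    return kept

facts = [("user", "live_in", "Amsterdam"), ("user", "work_at", "Google")]
facts = supersede(facts, ("user", "live_in", "Berlin"))
# ("user", "live_in", "Berlin") now stands in for the Amsterdam fact;
# unrelated facts like ("user", "work_at", "Google") are untouched
```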

3. Connect to your AI tool

One config. Works with Claude, Cursor, Windsurf, or any MCP client.

{
  "mcpServers": {
    "lore-memory": {
      "command": "python3",
      "args": ["/path/to/lore-memory/mcp/server.py"]
    }
  }
}
| Tool | Where to put it |
|---|---|
| Claude Desktop | `~/Library/Application Support/Claude/claude_desktop_config.json` |
| Claude Code | `.mcp.json` in project root |
| Cursor | `.cursor/mcp.json` in project root |
| Windsurf | `~/.codeium/windsurf/mcp_config.json` |

Your AI now remembers everything across conversations.



Why LoreMem


Local-First
SQLite + sentence-transformers. No API keys. No cloud. No cost.

Grammar Extraction
Parses by sentence structure. No regex. No dictionaries. No LLM.

Self-Learning
7 retrieval channels adapt via feedback and Hebbian learning.

Fast
Sub-50ms at 10K facts. ~20ms at 1K. No network calls.

User Isolation
Separate database per user. Zero data leakage.

Offline
Everything local. No telemetry. Your data never leaves.


|  | LoreMem | Cloud alternatives |
|---|---|---|
| Requires LLM | No | Yes |
| Cost | Free | $19–249/mo |
| Works offline | Yes | No |
| Extraction | Grammar-based | LLM-dependent |
| Self-learning | 7 mechanisms | Limited |
| User isolation | Physical (file-per-user) | API-level |


How It Works


Store — text in, structured facts out
  "I live in Amsterdam and work at Google"
           │                        │
           ▼                        ▼
   (user, live_in, Amsterdam)    (user, work_at, Google)

Parses English by grammar position. No verb dictionaries, no regex, no LLM. Raw text is always FTS5-indexed as a fallback.
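As a rough intuition for position-based parsing, here is a toy sketch (the `extract_triples` helper is hypothetical and handles only this one pattern; LoreMem's grammar extraction is far more general):

```python
# Toy sketch of extraction by grammar position: split coordinated
# clauses, fold a verb + preposition slot into a predicate, and take
# the trailing noun as the object.
def extract_triples(text):
    triples = []
    for clause in text.removeprefix("I ").split(" and "):
        words = clause.split()
        if len(words) >= 2 and words[1] in ("in", "at"):
            predicate = "_".join(words[:2])   # "live in" -> "live_in"
        else:
            predicate = words[0]
        triples.append(("user", predicate, words[-1]))
    return triples

extract_triples("I live in Amsterdam and work at Google")
# [("user", "live_in", "Amsterdam"), ("user", "work_at", "Google")]
```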

Recall — 7 scoring channels, fused into one ranked result
| Channel | What it does |
|---|---|
| Semantic | Cosine similarity (embeddings) |
| Keyword | BM25-style term overlap (FTS5) |
| Temporal | Exponential recency decay |
| Belief | Bayesian posterior (evidence + contradictions) |
| Frequency | Log-scaled access count |
| Graph | Spreading activation, 3-hop |
| Resonance | Co-activation frequency |

Weights adapt automatically through feedback.
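Score fusion can be sketched as a weighted sum over the channels above (the weights and the `fuse`/`temporal_score` helpers here are hypothetical illustrations, not LoreMem's actual values):

```python
import math
import time

# Hypothetical channel weights; in LoreMem these adapt via feedback.
WEIGHTS = {"semantic": 0.35, "keyword": 0.20, "temporal": 0.15,
           "belief": 0.10, "frequency": 0.08, "graph": 0.07,
           "resonance": 0.05}

def temporal_score(stored_at, half_life_days=30.0):
    """Exponential recency decay: the score halves every half_life_days."""
    age_days = (time.time() - stored_at) / 86400
    return math.exp(-math.log(2) * age_days / half_life_days)

def fuse(channel_scores):
    """Collapse per-channel scores in [0, 1] into one ranking score."""
    return sum(WEIGHTS[c] * s for c, s in channel_scores.items())

score = fuse({"semantic": 0.9, "keyword": 0.6, "temporal": 0.8,
              "belief": 0.7, "frequency": 0.3, "graph": 0.2,
              "resonance": 0.1})
```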

Learn — gets better the more you use it
m.feedback(results[0].id, helpful=True)   # adapt channel weights
m.consolidate()                           # decay + replay + archive
| Mechanism | Effect |
|---|---|
| Adaptive weights | Channels shift toward what works |
| Hebbian synapses | Co-retrieved facts strengthen links |
| Memory replay | Active memories resist decay |
| Ebbinghaus forgetting | Unused facts fade over time |
| Contradiction resolution | New facts supersede old ones |
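Feedback-driven weight adaptation can be sketched like this (a hypothetical update rule and learning rate — LoreMem's mechanism may differ): channels that contributed to a helpful result are nudged up, unhelpful ones down, then the weights are renormalized.

```python
# Sketch of adaptive channel weights: reinforce channels in proportion
# to how much they contributed to a result the user marked helpful.
def adapt(weights, contributions, helpful, lr=0.05):
    sign = 1.0 if helpful else -1.0
    updated = {c: max(w + sign * lr * contributions.get(c, 0.0), 1e-6)
               for c, w in weights.items()}
    total = sum(updated.values())
    return {c: w / total for c, w in updated.items()}

w = {"semantic": 0.5, "keyword": 0.3, "temporal": 0.2}
w = adapt(w, {"semantic": 0.9, "keyword": 0.1}, helpful=True)
# semantic's share grows relative to temporal, which contributed nothing
```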


Benchmarks

Measured on Apple M-series hardware with Python 3.9. Reproduce with: python benchmarks/lore_bench.py

Test Suite — 138 tests

| Capability | Pass |
|---|---|
| Correction chains | 10/10 |
| Negation & retraction | 5/8 |
| Memory decay | 5/5 |
| Self-learning | 2/2 |
| User isolation | 100/100 |
| Grammar extraction | 10/10 |
| Scale & latency | 3/3 |
| Overall | 135/138 |

Latency — per operation

| Facts stored | Recall p50 | Write |
|---|---|---|
| 100 | 14 ms | 8.4 ms |
| 1,000 | 21 ms | 7.9 ms |
| 5,000 | 33 ms | 7.7 ms |
| 10,000 | 50 ms | 7.6 ms |

Figures use hash embeddings; real (model-based) embeddings add roughly 7 ms per write.
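A hash embedding can be pictured as feature hashing over tokens — a minimal sketch, assuming a scheme like the following (not necessarily LoreMem's exact one):

```python
import hashlib
import math

# Sketch of a hash embedding: each token is hashed into one of `dims`
# buckets, giving a cheap, deterministic vector with no model download.
def hash_embed(text, dims=384):
    vec = [0.0] * dims
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

v = hash_embed("I work at Google")
# unit-length 384-dim vector; identical text always maps to the same vector
```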

[!NOTE] Negation detection (62%) is a known limitation. Phrases like "I can't stand X" and "I stopped doing X" are not yet reliably parsed.



API Reference

Core API
m = Memory(user_id="alice", org_id="acme", data_dir="~/.lore-memory")

m.store(text, scope="private")         # Store from natural language
m.query(query, limit=10)               # 7-channel retrieval
m.forget(memory_id=...)                # Delete by ID
m.forget(subject="alice")              # Delete by subject
m.forget_all()                         # Purge all user data
m.close()                              # Persist and close
Advanced API
m.store_triple("alice", "works_at", "Google", confidence=0.9)
m.profile()                            # All facts by predicate
m.profile_compact(max_tokens=200)      # Token-budgeted LLM context
m.feedback(memory_id, helpful=True)    # Drive adaptive learning
m.consolidate()                        # Decay + replay + archive
m.stats()                              # Memory counts by scope
Context manager
with Memory(user_id="alice") as m:
    m.store("I live in Amsterdam")
    results = m.query("where do I live?")
Custom embeddings
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("all-MiniLM-L6-v2")

m = Memory(user_id="alice", embedding_dims=384, embed_fn=model.encode)
Multi-user isolation
alice = Memory(user_id="alice")
bob   = Memory(user_id="bob")

alice.store("I work at Google")
bob.query("where does alice work?")  #> [] — fully isolated

Shared org memories:

alice = Memory(user_id="alice", org_id="acme")
alice.store("Our mission is to democratize AI", scope="shared")

bob = Memory(user_id="bob", org_id="acme")
bob.query("what is our mission?")  #> Returns shared memory
CLI
lore store "I work at Google"
lore query "where do I work?"
lore list
lore stats
lore forget --id <id>
lore serve --port 8420     # REST API
lore mcp                   # MCP server
REST API
pip install loremem-ai[api]
lore serve --port 8420

# Store
curl -X POST localhost:8420/memory \
  -H "Content-Type: application/json" \
  -d '{"user_id":"alice","text":"I prefer dark mode"}'

# Query
curl "localhost:8420/memory?user_id=alice&query=preferences"
Docker
docker build -t loremem -f docker/Dockerfile .
docker run -p 8420:8000 -v lore_data:/data loremem


Contributing

Contributions welcome. See CONTRIBUTING.md.

git clone https://github.com/loreMemory/loreMemory.git && cd loreMemory
pip install -e ".[dev]" && pytest tests/ -v

Security  ·  Changelog  ·  License

MIT — free for personal and commercial use.


Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

loremem_ai-1.0.4.tar.gz (90.7 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

loremem_ai-1.0.4-py3-none-any.whl (44.8 kB)

Uploaded Python 3

File details

Details for the file loremem_ai-1.0.4.tar.gz.

File metadata

  • Download URL: loremem_ai-1.0.4.tar.gz
  • Upload date:
  • Size: 90.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for loremem_ai-1.0.4.tar.gz
Algorithm Hash digest
SHA256 072248fcd4dcb1777e5b1b17ce6569c2c23ce05a6c5fcd03b2b0be3ef6929380
MD5 c0d259edaa3b621b2e69a1c528c96008
BLAKE2b-256 dac0d0577c4b7793082a11d8beda1785b8ffbade95a8546b3705c1fdddfb8456

See more details on using hashes here.

Provenance

The following attestation bundles were made for loremem_ai-1.0.4.tar.gz:

Publisher: publish.yml on loreMemory/loreMemory

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file loremem_ai-1.0.4-py3-none-any.whl.

File metadata

  • Download URL: loremem_ai-1.0.4-py3-none-any.whl
  • Upload date:
  • Size: 44.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for loremem_ai-1.0.4-py3-none-any.whl
Algorithm Hash digest
SHA256 3e761af80e2eaf6fa93492f024f51d94c2eca5f2429ce5e7e953aa752ae53077
MD5 7f902747cfc1ec0235813be3a135e2ee
BLAKE2b-256 716f8e76f41337924954d6d02a44afcdb125296f2e3c9c6a1d343229d10aa8f7

See more details on using hashes here.

Provenance

The following attestation bundles were made for loremem_ai-1.0.4-py3-none-any.whl:

Publisher: publish.yml on loreMemory/loreMemory

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
