# 🌳 lorien

Local-first personal knowledge graph for AI agents — ontology, causal chains, contradiction detection.

What to believe, why, and what conflicts — structured memory that Mem0 can't do.
```shell
pip install lorien-memory              # core (KuzuDB + CLI)
pip install "lorien-memory[vectors]"   # + semantic search
```
## Why lorien?
| Feature | Mem0 | lorien |
|---|---|---|
| Conversation memory | ✅ | ✅ |
| Semantic vector search | ✅ | ✅ |
| Local (no server) | ❌ | ✅ |
| Cost | $249/mo | $0 |
| Priority rule system | ❌ | ✅ |
| Causal reasoning (CAUSED) | ❌ | ✅ |
| Auto contradiction detection | ❌ | ✅ |
lorien stores structured knowledge — not just flat strings. Every fact has a source, every rule has a priority, and contradictions are detected automatically.
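As a rough mental model, the stored shapes can be pictured with two illustrative dataclasses (not lorien's actual classes; the field names are assumptions):

```python
from dataclasses import dataclass

# Illustrative shapes only, not lorien's real classes.
@dataclass
class Fact:
    subject: str
    predicate: str
    object: str
    source: str          # every fact records where it came from

@dataclass
class Rule:
    text: str
    priority: int        # 0–100; 100 = absolute prohibition

f = Fact("alice", "has_allergy", "shellfish", source="conversation")
r = Rule("Never recommend shellfish to alice", priority=100)
```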
## Quickstart
```python
from lorien import LorienMemory

mem = LorienMemory(enable_vectors=True)

# Add a conversation
mem.add([
    {"role": "user", "content": "I have a severe shellfish allergy. Oysters send me to the ER."},
    {"role": "assistant", "content": "Noted — I'll never recommend shellfish."},
], user_id="alice")

# 3 months later — new conversation
mem.add([
    {"role": "user", "content": "Where should I eat tonight?"},
    {"role": "assistant", "content": "The new oyster bar on Main St is great!"},
], user_id="alice")

# Semantic search — finds the allergy even without exact keywords
results = mem.search("seafood restrictions", user_id="alice")
# → [{"memory": "User has severe shellfish allergy...", "score": 0.82}]

# Auto-detected contradiction
contradictions = mem.get_contradictions()
# → [{"fact_a": "shellfish allergy...", "fact_b": "oyster bar recommendation..."}]

# Hard rules with priority
rules = mem.get_entity_rules("alice")
# → [{"text": "Never recommend shellfish to alice", "priority": 100}]
```
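Rule priorities resolve conflicts in the obvious way: the highest priority wins. A minimal sketch, illustrative only and not lorien's internals:

```python
# Illustrative sketch (not lorien's internals): when rules conflict,
# the rule with the highest priority wins; 100 is an absolute prohibition.
rules = [
    {"text": "Prefer seafood restaurants", "priority": 40},
    {"text": "Never recommend shellfish to alice", "priority": 100},
]

def winning_rule(rules: list[dict]) -> dict:
    """Pick the rule with the highest priority."""
    return max(rules, key=lambda r: r["priority"])

print(winning_rule(rules)["text"])  # → Never recommend shellfish to alice
```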
## Schema
lorien uses KuzuDB — an embedded graph database (like SQLite, but for graphs).
```
Entity ─── HAS_RULE ───► Rule
  │
ABOUT
  │
  ▼
Fact ─── CAUSED ──► Fact
  │
CONTRADICTS
  │
  ▼
Fact
```
3 node types:

- Entity — people, organizations, topics (`canonical_key = "type:name"`)
- Fact — statements about entities (subject → predicate → object)
- Rule — constraints with priority 0–100 (100 = absolute prohibition)

5 edge types: `ABOUT`, `HAS_RULE`, `RELATED_TO`, `CAUSED`, `CONTRADICTS`
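Because CAUSED edges link Fact nodes into chains, tracing causality is a plain graph walk. An illustration in pure Python (not the lorien API; the facts and edges here are invented for the example):

```python
# Illustrative only: Fact nodes as strings, CAUSED edges as an adjacency dict.
caused = {
    "quit caffeine": ["sleeps better"],
    "sleeps better": ["more productive"],
}

def causal_chain(fact: str) -> list[str]:
    """Return facts reachable from `fact` via CAUSED edges (DFS; assumes no cycles)."""
    chain, stack = [], [fact]
    while stack:
        f = stack.pop()
        chain.append(f)
        stack.extend(caused.get(f, []))
    return chain

print(causal_chain("quit caffeine"))
# → ['quit caffeine', 'sleeps better', 'more productive']
```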
## CLI

```shell
# Initialize
lorien init

# Check status
lorien status

# Ingest a file (MEMORY.md, notes, etc.)
lorien ingest MEMORY.md
lorien ingest MEMORY.md --model haiku   # LLM extraction via OpenClaw

# Query the graph
lorien query "MATCH (e:Entity) RETURN e.name LIMIT 10"

# Show entity details
lorien show "alice"

# List contradictions
lorien contradictions

# Conversation memory for a user
lorien memory alice

# Web visualization (vis.js, no extra deps)
lorien serve
```
## Contradiction Detection

After every fact is ingested, lorien automatically checks for semantic contradictions:

1. Vector similarity — find facts with similar meaning (threshold 0.55)
2. Heuristic check — negation-pair patterns (allow↔forbid, always↔never, must↔must not, ...)
3. LLM confirmation (optional) — yes/no question to any OpenAI-compatible model
4. CONTRADICTS edge — auto-created in the graph for later querying
```python
# `store` and `vi` are existing graph-store and vector-index handles
detector = ContradictionDetector(
    store=store,
    vector_index=vi,
    llm_model="gpt-4o-mini",       # optional LLM confirmation step
    api_key="sk-...",
    similarity_threshold=0.55,
)
n = detector.check_and_record(new_fact_id, new_fact_text)
```
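The negation-pair heuristic (step 2) can be approximated in a few lines. This is a simplified stand-in for illustration, not lorien's actual matcher:

```python
# Simplified stand-in for the negation-pair heuristic: two facts containing
# opposing keywords are flagged as a candidate contradiction. Naive substring
# matching; the real detector also requires high vector similarity first.
NEGATION_PAIRS = [
    ("always", "never"),
    ("must", "must not"),
    ("allow", "forbid"),
]

def looks_contradictory(fact_a: str, fact_b: str) -> bool:
    """Return True if the two facts contain an opposing keyword pair."""
    a, b = fact_a.lower(), fact_b.lower()
    for pos, neg in NEGATION_PAIRS:
        # Check both directions: pair may appear in either fact.
        if (pos in a and neg in b) or (neg in a and pos in b):
            return True
    return False

print(looks_contradictory("Always recommend oysters", "Never recommend shellfish"))
# → True
```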
## OpenClaw Integration

lorien auto-detects the OpenClaw gateway when available:

```shell
lorien ingest MEMORY.md --model haiku   # routes through OpenClaw → Anthropic
lorien ingest notes.md  --model flash   # routes through OpenClaw → Gemini
```

No API key needed when the OpenClaw gateway is running locally.
## Installation

```shell
# Core only (graph + CLI, no LLM, no vectors)
pip install lorien-memory

# With semantic search
pip install "lorien-memory[vectors]"

# With OpenAI-compatible LLM extraction
pip install "lorien-memory[llm]"

# Everything
pip install "lorien-memory[all]"
```

Requirements: Python 3.12+, no server, no Docker.
The database is stored at `~/.lorien/db`; vectors at `~/.lorien/vectors.db`.
## Roadmap

- v0.1 — Core graph schema (Entity, Fact, Rule + 5 edge types)
- v0.1 — LLM ingest via OpenClaw gateway
- v0.1 — Mem0-compatible `LorienMemory` API
- v0.2 — Vector semantic search (sentence-transformers, multilingual)
- v0.2 — Automatic contradiction detection
- v0.2 — PyPI release (`pip install lorien-memory`)
- v1.0 — Web graph visualization
- v1.0 — LangChain adapter
## vs Mem0
Mem0 answers: "What did the user say?"
lorien answers: "What should I believe, why, and does it contradict anything?"
Use both. They're complementary, not competitors.
MIT License · GitHub
## File details

### lorien_memory-0.2.0.tar.gz

- Download URL: lorien_memory-0.2.0.tar.gz
- Upload date:
- Size: 29.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3

| Algorithm | Hash digest |
|---|---|
| SHA256 | `216b1f8d2418a174628e9603a8788df58f362331a19c3cee062af52a6fc0bffb` |
| MD5 | `6224baeac6d09562d537f458c5cbed37` |
| BLAKE2b-256 | `380e258601225d76379377c58b437945723bfc5057b71fa3870b88369217b19e` |
### lorien_memory-0.2.0-py3-none-any.whl

- Download URL: lorien_memory-0.2.0-py3-none-any.whl
- Upload date:
- Size: 30.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3

| Algorithm | Hash digest |
|---|---|
| SHA256 | `6f02f21bb1e68d39755fe1836c8b70b3c6c7120422cc7c1c0f86a2fccfde1346` |
| MD5 | `b1af01efa8bc90c130474cb53977720f` |
| BLAKE2b-256 | `85fddb0be31ee6a86f9bd6f5625f6463eea5f5e2a87936d0fdca0e9f5dc39494` |