
Temporal knowledge graph memory system for AI agents


Memento


Any model, same memory. A bitemporal knowledge graph (tracking when facts were true vs. when they were learned) that gives AI agents persistent, structured memory across LLM providers, clients, and conversations.

Most AI memory systems dump text into a vector store and retrieve by similarity. Memento builds a knowledge graph that resolves entities, detects contradictions, tracks time, and composes answers from structured relationships rather than raw chunks.

Works with any MCP-compatible client (Claude Desktop, Cursor, Claude Code, Cline, Windsurf, OpenClaw, Continue.dev) and any LLM backend (Claude, GPT, Gemini, Llama, Mistral, Ollama, or any OpenAI-compatible endpoint).

90.8% overall accuracy, 92.2% task average on LongMemEval (500 questions, GPT-4o judge) — a benchmark for long-term conversational memory covering temporal reasoning, knowledge updates, multi-session recall, and preference tracking.

Quick Start

MCP Server

pip install memento-memory[anthropic]
export ANTHROPIC_API_KEY=your-key
memento-mcp

Add to your MCP client config (e.g., Claude Desktop claude_desktop_config.json):

{
  "mcpServers": {
    "memento": {
      "command": "memento-mcp",
      "env": { "ANTHROPIC_API_KEY": "your-key" }
    }
  }
}

That's it. The agent now has persistent memory and calls memory_ingest to store facts and memory_recall to retrieve them. Every MCP client on the same machine shares the same knowledge graph.

Python API

from memento import MemoryStore

store = MemoryStore()

# Ingest — extracts entities, resolves against the graph, detects contradictions
store.ingest("John Smith is VP of Sales at Alpha Corp.")
store.ingest("Alpha Corp is acquiring Beta Inc.")

# Recall — graph traversal + ranking + context budgeting
memory = store.recall("What should I know about John?")
print(memory.text)
# ## John Smith (person)
# - title: VP of Sales
# - → [works_at] Alpha Corp
#
# ## Alpha Corp (organization)
# - → [acquiring] Beta Inc

# Point-in-time queries
memory = store.recall("Where was John in January?", as_of="2025-01-31T00:00:00Z")

# Direct manipulation (entity IDs come from recall results or store.entity_list())
store.correct(entity_id, "title", "VP of Sales", reason="Promoted")
store.forget(entity_id=entity_id)
store.merge(entity_a_id, entity_b_id)

# Introspection
conflicts = store.conflicts()
health = store.health()
entities = store.entity_list()

# Privacy
export = store.export_entity_data(entity_id)
chain = store.audit_belief(entity_id, "title")
receipt = store.hard_delete(entity_id)

# Consolidation
store.consolidate()

# Session tracking (scratchpad with coreference)
session = store.start_session()
session.on_turn("I met John Smith today.")
session.on_turn("He mentioned a new project.")
session.end()  # Flushes through ingestion pipeline

LLM Providers

Memento is provider-agnostic. Swap the backend via config — no code changes.

| Provider | Install | Config |
|---|---|---|
| Anthropic | `pip install memento-memory[anthropic]` | `ANTHROPIC_API_KEY` |
| OpenAI | `pip install memento-memory[openai]` | `OPENAI_API_KEY`, `MEMENTO_LLM_PROVIDER=openai` |
| Google Gemini | `pip install memento-memory[gemini]` | `GOOGLE_API_KEY`, `MEMENTO_LLM_PROVIDER=gemini` |
| Ollama (fully local) | `pip install memento-memory[openai]` | `MEMENTO_LLM_PROVIDER=ollama` |
| Any OpenAI-compatible | `pip install memento-memory[openai]` | `MEMENTO_LLM_PROVIDER=openai-compatible`, `MEMENTO_LLM_BASE_URL=...` |
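
Since the backend is chosen from environment variables, swapping providers is a matter of setting a couple of values before the store starts. A minimal sketch using the variables from the table above (the `http://localhost:11434` endpoint is Ollama's usual default and is an assumption here, not something Memento requires):

```python
import os

# Select a fully local Ollama backend -- configuration only, no code changes.
os.environ["MEMENTO_LLM_PROVIDER"] = "ollama"
# Optional: override the endpoint if Ollama isn't on its default port.
os.environ["MEMENTO_LLM_BASE_URL"] = "http://localhost:11434"

# With these set, constructing the store would pick up the Ollama backend:
# from memento import MemoryStore
# store = MemoryStore()
```

The same pattern applies to any OpenAI-compatible endpoint (`MEMENTO_LLM_PROVIDER=openai-compatible` plus `MEMENTO_LLM_BASE_URL`).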

How It Works

Agent / LLM
  │ query              │ ingest
  ▼                    ▼
Retrieval Engine    Ingestion Pipeline
  │                    │
  ▼                    ▼
Bitemporal Knowledge Graph (SQLite)
  │
  ├── Consolidation Engine (decay, dedup, prune)
  ├── Verbatim Fallback (FTS5 + vector search)
  └── Privacy Layer (export, audit, hard delete)

• Entity resolution — "John," "John Smith," and "the Alpha guy" become one node. Tiered matching: exact/fuzzy/phonetic (cheap) before embedding similarity and an LLM tiebreaker (expensive).
• Contradiction detection — flags when new facts conflict with existing ones.
• Bitemporal model — every fact tracks when it was true (valid time) and when the system learned it (transaction time).
• Immutable history — facts are never deleted, only superseded. Full audit trail.
• Verbatim fallback — raw text is stored alongside the graph, so extraction loss doesn't mean information loss.
• Compositional retrieval — "What should I know before my meeting with John?" traverses the graph rather than just retrieving chunks.
• Confidence decay — multiplicative decay prevents artificial confidence floors from repeated confirmations.
• Consolidation — a background engine decays stale information, merges duplicates, and prunes orphans.
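
The bitemporal model is easiest to see with a toy example. The sketch below is illustrative only (not Memento's actual storage schema): each fact carries a valid-from time and a transaction (recorded) time, and a point-in-time query filters on both.

```python
from dataclasses import dataclass

@dataclass
class Fact:
    entity: str
    key: str
    value: str
    valid_from: str   # when the fact became true in the world (valid time)
    recorded_at: str  # when the system learned it (transaction time)

facts = [
    # John was promoted on 2025-03-01, but it was only ingested on 2025-03-10.
    Fact("john", "title", "Sales Manager", valid_from="2024-01-01", recorded_at="2024-01-05"),
    Fact("john", "title", "VP of Sales",   valid_from="2025-03-01", recorded_at="2025-03-10"),
]

def as_of(entity: str, key: str, valid_time: str, known_by: str):
    """Latest value that was true at `valid_time` AND already known by `known_by`."""
    candidates = [
        f for f in facts
        if f.entity == entity and f.key == key
        and f.valid_from <= valid_time and f.recorded_at <= known_by
    ]
    return max(candidates, key=lambda f: f.valid_from).value if candidates else None

# State of the world on 2025-03-05, as the system believed it on 2025-03-06
# (the promotion had happened but hadn't been ingested yet):
print(as_of("john", "title", "2025-03-05", known_by="2025-03-06"))  # Sales Manager
# Same valid time, queried after the promotion was recorded:
print(as_of("john", "title", "2025-03-05", known_by="2025-03-31"))  # VP of Sales
```

Keeping both axes is what makes "Where was John in January?" answerable without overwriting history: later corrections add rows rather than mutate them.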

Benchmarks

90.8% overall accuracy on LongMemEval (500 questions across 6 categories):

| Category | Accuracy |
|---|---|
| Single-session (assistant) | 98.2% |
| Single-session (user) | 97.1% |
| Single-session (preference) | 93.3% |
| Temporal reasoning | 89.5% |
| Knowledge update | 88.5% |
| Multi-session | 86.5% |
| Task-averaged | 92.2% |

Full methodology and reproduction steps: BENCHMARKS.md

CLI

Admin and introspection tools for the knowledge graph:

memento entities                        # List all entities
memento entity <id>                     # Show entity details
memento history <id> <key>              # Property history over time
memento snapshot <id> --as-of 2025-06   # Point-in-time view
memento stats                           # Graph statistics
memento merge <id_a> <id_b>             # Merge two entities
memento consolidate                     # Run maintenance pass
memento export <id>                     # GDPR data export (JSON)
memento audit <id> <key>                # Trace a belief to its source
memento delete <id> --hard              # Hard delete with receipt

Configuration

| Variable | Default | Description |
|---|---|---|
| `MEMENTO_LLM_PROVIDER` | auto-detect | `anthropic`, `openai`, `gemini`, `ollama`, or `openai-compatible` |
| `MEMENTO_LLM_API_KEY` | | API key (or use provider-specific env vars) |
| `MEMENTO_LLM_BASE_URL` | | For Ollama/vLLM endpoints |
| `MEMENTO_DB_PATH` | `~/.memento/memento.db` | SQLite database path |
| `MEMENTO_EMBEDDING_PROVIDER` | `sentence-transformers` | `sentence-transformers` or `openai` |
| `ANTHROPIC_API_KEY` | | Anthropic-specific key |
| `OPENAI_API_KEY` | | OpenAI-specific key |
| `GOOGLE_API_KEY` | | Gemini-specific key |

License

MIT
