
lumetra-engram

Official Python client for Engram — durable, explainable memory for AI agents.

  • Zero runtime dependencies (uses the standard library's urllib).
  • Fully typed (py.typed, TypedDict response shapes, IDE-friendly).
  • Python 3.9+.

The TypeScript twin lives at lumetra-io/engram-js.

Install

pip install lumetra-engram
# or
uv add lumetra-engram
# or
poetry add lumetra-engram

Quickstart

from lumetra_engram import EngramClient

engram = EngramClient(api_key="eng_live_...")  # or set ENGRAM_API_KEY and omit

# Store a fact
engram.store_memory("User prefers dark mode.", "user-123")

# Recall — returns a synthesized answer plus the memories that contributed
result = engram.query(
    "What are this user's UI preferences?",
    buckets=["user-123"],
)

print(result["answer"])
print(result.get("explanation", {}).get("retrieved_memories", []))

Configuration

EngramClient(
    api_key="eng_live_...",            # or ENGRAM_API_KEY env var
    base_url="https://api.lumetra.io", # or ENGRAM_BASE_URL env var
    timeout_seconds=30.0,              # default 30s
)

BYOK reminder. Engram is bring-your-own-key end-to-end. Configure an OpenAI / Anthropic / Groq / Together / Fireworks key in the Lumetra portal before your first call, or store_memory / query will raise EngramError with status == 412.

API surface

Memories

  • store_memory(content, bucket="default") — store a single fact
  • store_memories(contents, bucket="default") — batched store
  • list_memories(bucket="default", *, limit=20, offset=0) — paginated list
  • delete_memory(memory_id, bucket="default") — delete one memory
  • clear_memories(bucket) — delete every memory in a bucket
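list_memories is offset-paginated, so fetching everything means walking pages until one comes back short. A minimal helper (not part of the package, just a sketch) does that against any callable; fetch_page below stands in for something like lambda **kw: engram.list_memories("user-123", **kw), and is assumed to return a plain list of memories.

```python
def iter_all_memories(fetch_page, page_size=20):
    """Yield every memory from an offset-paginated endpoint.

    fetch_page(limit=..., offset=...) is assumed to return a list;
    iteration stops at the first short (or empty) page.
    """
    offset = 0
    while True:
        page = fetch_page(limit=page_size, offset=offset)
        yield from page
        if len(page) < page_size:
            return
        offset += page_size
```

If list_memories actually wraps its items in an envelope, unwrap inside the lambda you pass in.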

Query

  • query(question, *, buckets=None, top_k=8, skip_synthesis=False, return_explanation=True)
    • buckets fuses across multiple buckets in one call
    • skip_synthesis=True returns retrieval-only — no server-side LLM call
    • response shape: {"answer", "explanation": {"retrieved_memories", "profile", "graph_facts"}, "usage"}
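When you run retrieval-only (skip_synthesis=True) or just care about provenance, the interesting payload is explanation.retrieved_memories. A defensive accessor for the documented shape (a sketch; the field names come from the response-shape bullet above) keeps callers from tripping over responses where the explanation was omitted:

```python
def retrieved_memories(result):
    """Extract retrieved memories from a query response.

    Tolerates responses where "explanation" is missing or None,
    e.g. when the query was made with return_explanation=False.
    """
    explanation = result.get("explanation") or {}
    return explanation.get("retrieved_memories") or []
```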

Buckets

  • list_buckets() — all buckets in your tenant
  • create_bucket(name, description=None)
  • delete_bucket(bucket)
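A create-if-missing helper can be layered on list_buckets / create_bucket so setup code is idempotent. This is a sketch: it assumes list_buckets() returns dicts carrying a "name" field, which the bullets above do not actually specify.

```python
def ensure_bucket(client, name, description=None):
    """Create the bucket only if no bucket with that name exists yet.

    Assumes list_buckets() yields dicts with a "name" key.
    """
    existing = {b.get("name") for b in client.list_buckets()}
    if name not in existing:
        client.create_bucket(name, description=description)
    return name
```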

Profile

  • get_profile(bucket="default") — the canonical profile prepended to recall
  • regenerate_profile(bucket="default") — rebuild from current memories

Errors

All non-2xx HTTP responses raise EngramError:

from lumetra_engram import EngramClient, EngramError

engram = EngramClient()

try:
    engram.store_memory("User prefers dark mode.", "user-123")
except EngramError as err:
    if err.status == 412:
        print("BYOK not configured — set an LLM provider key in the Lumetra portal.")
    elif err.status == 429:
        print("Rate limited — back off and retry.")
    else:
        print(f"Engram {err.status}: {err}")
        print("Body:", err.body)

err.status is the HTTP status (0 for connection failures); err.body is the parsed JSON body when the server returned one.
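A 429 usually just wants a pause. A minimal exponential-backoff wrapper, shown as a sketch rather than part of the client, treats any raised exception carrying status == 429 as retryable, so it works with EngramError without importing it:

```python
import time

def with_backoff(call, *, retries=3, base_delay=1.0):
    """Call `call()`, retrying on 429s with exponentially growing delays.

    Non-429 errors, and the final 429 once retries are exhausted,
    are re-raised unchanged.
    """
    for attempt in range(retries + 1):
        try:
            return call()
        except Exception as err:
            if getattr(err, "status", None) != 429 or attempt == retries:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

Usage: with_backoff(lambda: engram.store_memory("User prefers dark mode.", "user-123")).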

Async usage

This client is synchronous. For async code, wrap calls in asyncio.to_thread:

import asyncio
from lumetra_engram import EngramClient

engram = EngramClient()

async def recall(question: str):
    return await asyncio.to_thread(engram.query, question, buckets=["user-123"])

A dedicated async client may land later; until then, the thread wrapper is the recommended pattern.
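The same wrapper extends to fan-out: several blocking query() calls can run concurrently on worker threads via asyncio.gather. A sketch using only the method shown above:

```python
import asyncio

async def recall_many(engram, questions, bucket):
    """Run several blocking query() calls concurrently on worker threads."""
    return await asyncio.gather(
        *(asyncio.to_thread(engram.query, q, buckets=[bucket])
          for q in questions)
    )
```

Results come back in the same order as the questions, per asyncio.gather.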

Type hints

Return shapes are declared as TypedDicts in lumetra_engram.types. They behave as ordinary dicts at runtime (JSON-serialize them freely) but give mypy and pyright the same level of detail the TypeScript client exposes via interface.

from lumetra_engram import QueryResult

def summarize(result: QueryResult) -> str:
    return result.get("answer", "")

License

MIT — see LICENSE.

