
lumetra-engram

Official Python client for Engram — durable, explainable memory for AI agents.

  • Zero runtime dependencies (uses the standard library's urllib).
  • Fully typed (py.typed, TypedDict response shapes, IDE-friendly).
  • Python 3.9+.

The TypeScript twin lives at lumetra-io/engram-js.

Install

pip install lumetra-engram
# or
uv add lumetra-engram
# or
poetry add lumetra-engram

Quickstart

from lumetra_engram import EngramClient

engram = EngramClient(api_key="eng_live_...")  # or set ENGRAM_API_KEY and omit

# Store a fact
engram.store_memory("User prefers dark mode.", "user-123")

# Recall — returns a synthesized answer plus the memories that contributed
result = engram.query(
    "What are this user's UI preferences?",
    buckets=["user-123"],
)

print(result["answer"])
print(result.get("explanation", {}).get("retrieved_memories", []))

Configuration

EngramClient(
    api_key="eng_live_...",            # or ENGRAM_API_KEY env var
    base_url="https://api.lumetra.io", # or ENGRAM_BASE_URL env var
    timeout_seconds=30.0,              # default 30s
)
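The precedence above (explicit api_key argument first, then the ENGRAM_API_KEY environment variable) can be checked up front if you want a clear error before constructing the client. resolve_api_key below is an illustrative helper, not part of the package:

```python
import os
from typing import Optional

def resolve_api_key(explicit: Optional[str] = None) -> str:
    """Return the key to use, preferring an explicit argument over the env var."""
    key = explicit or os.environ.get("ENGRAM_API_KEY")
    if not key:
        raise RuntimeError("No Engram API key: pass api_key=... or set ENGRAM_API_KEY")
    return key
```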

BYOK reminder. Engram is bring-your-own-key end-to-end. Configure an OpenAI / Anthropic / Groq / Together / Fireworks key on the Lumetra portal before your first call, or store_memory / query will raise EngramError with status == 412.

API surface

Memories

  • store_memory(content, bucket="default") — store a single fact
  • store_memories(contents, bucket="default") — batched store
  • list_memories(bucket="default", *, limit=20, offset=0) — paginated list
  • delete_memory(memory_id, bucket="default") — delete one memory
  • clear_memories(bucket) — delete every memory in a bucket. No default — explicit bucket required (prevents accidental wipes).
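Together these calls cover a bucket's full write lifecycle. A sketch that resets a bucket's contents, written against any EngramClient-like object; note the "memories" key on the list_memories response is an assumption for illustration, not a documented shape:

```python
from typing import List

def replace_bucket_facts(engram, bucket: str, facts: List[str]) -> int:
    """Clear a bucket, store a fresh batch of facts, and return the new count."""
    engram.clear_memories(bucket)           # explicit bucket: no default here
    engram.store_memories(facts, bucket)    # batched store
    page = engram.list_memories(bucket, limit=50, offset=0)
    return len(page.get("memories", []))    # "memories" key is assumed
```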

Query

  • query(question, *, buckets=None, top_k=8, skip_synthesis=False, return_explanation=True)
    • buckets — pass several buckets to fuse recall across them in one call. Defaults to ["default"].
    • skip_synthesis=True returns retrieval-only — no server-side LLM call
    • response shape: {"answer", "explanation": {"retrieved_memories", "profile", "graph_facts"}, "usage"}
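When you only need raw retrieval for your own prompt assembly, skip_synthesis=True avoids the server-side LLM call entirely. A minimal sketch built on the response shape listed above:

```python
def retrieve_only(engram, question: str, buckets=None):
    """Return the contributing memories without a synthesized answer."""
    result = engram.query(
        question,
        buckets=buckets or ["default"],
        skip_synthesis=True,        # retrieval-only: no server-side LLM call
        return_explanation=True,
    )
    return result.get("explanation", {}).get("retrieved_memories", [])
```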

Buckets

  • list_buckets() — all buckets in your tenant
  • create_bucket(name, description=None)
  • delete_bucket(bucket) — delete a bucket. No default — explicit bucket required (prevents accidental wipes).
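A common pattern is one bucket per user, created lazily on first contact. ensure_bucket is an illustrative helper; it assumes each item returned by list_buckets() carries a "name" key, which is a guess about the response shape rather than something documented above:

```python
def ensure_bucket(engram, name: str, description=None):
    """Create a bucket only if it does not already exist."""
    existing = {b.get("name") for b in engram.list_buckets()}  # "name" key assumed
    if name not in existing:
        engram.create_bucket(name, description)
```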

Profile

  • get_profile(bucket="default") — the canonical profile prepended to recall
  • regenerate_profile(bucket="default") — rebuild from current memories
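After a large batch import, regenerating before the next read keeps the prepended profile in sync. A small sketch combining the two calls:

```python
def refresh_profile(engram, bucket: str = "default"):
    """Rebuild the canonical profile from current memories, then return it."""
    engram.regenerate_profile(bucket)
    return engram.get_profile(bucket)
```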

Errors

All non-2xx HTTP responses raise EngramError:

from lumetra_engram import EngramClient, EngramError

engram = EngramClient()

try:
    engram.store_memory("User prefers dark mode.", "user-123")
except EngramError as err:
    if err.status == 412:
        print("BYOK not configured — set an LLM provider key in the Lumetra portal.")
    elif err.status == 429:
        print("Rate limited — back off and retry.")
    else:
        print(f"Engram {err.status}: {err}")
        print("Body:", err.body)

err.status is the HTTP status (or 0 for connection failures); err.body is the parsed JSON body when one was returned.
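For 429s, a simple exponential-backoff wrapper is usually enough. This helper is generic: it only assumes the raised exception exposes the status attribute described above, so it works with EngramError without importing it:

```python
import random
import time

def call_with_retry(fn, *, retries=3, base_delay=0.5):
    """Call fn(), retrying with exponential backoff when status == 429."""
    for attempt in range(retries + 1):
        try:
            return fn()
        except Exception as err:
            if getattr(err, "status", None) != 429 or attempt == retries:
                raise  # not rate-limited, or out of retries: re-raise as-is
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

Usage: `call_with_retry(lambda: engram.store_memory("User prefers dark mode.", "user-123"))`.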

Async usage

This client is synchronous. For async code, wrap calls in asyncio.to_thread:

import asyncio
from lumetra_engram import EngramClient

engram = EngramClient()

async def recall(question: str):
    return await asyncio.to_thread(engram.query, question, buckets=["user-123"])

A dedicated async client may land later; until then, the thread wrapper is the recommended pattern.
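If you wrap several methods this way, a thin facade keeps call sites tidy. AsyncEngram below is a hypothetical helper built on the same to_thread pattern, not something the package ships:

```python
import asyncio

class AsyncEngram:
    """Hypothetical async facade over a synchronous EngramClient-like object."""

    def __init__(self, client):
        self._client = client

    async def query(self, question, **kwargs):
        # Run the blocking HTTP call in a worker thread.
        return await asyncio.to_thread(self._client.query, question, **kwargs)

    async def store_memory(self, content, bucket="default"):
        return await asyncio.to_thread(self._client.store_memory, content, bucket)
```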

Type hints

Return shapes are declared as TypedDict in lumetra_engram.types. They behave as ordinary dict at runtime — JSON-serialize freely — but give mypy and pyright the same level of detail the TypeScript client exposes via interface.

from lumetra_engram import QueryResult

def summarize(result: QueryResult) -> str:
    return result.get("answer", "")

License

MIT — see LICENSE.
