
memory-model

Official Python SDK for MemoryModel.

Installation

pip install memory-model

Quick Start

from memory_model import MemoryClient

client = MemoryClient(
    api_key="sk_live_...",
    default_end_user_id="user_123"
)

# Add a memory
client.add("Project deadline is Friday.")

# Search memories
results = client.search("When is the deadline?")
for memory in results:
    print(f"[{memory.similarity:.2f}] {memory.content}")

API Reference

Constructor

MemoryClient(
    api_key: str,                      # Required
    base_url: str = "https://api.memorymodel.dev",
    default_end_user_id: str | None = None,
    timeout: int = 30,                 # Seconds
    max_retries: int = 3,              # Retries for 5xx/429
    api_version: str = "v1"            # API version prefix
)
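The base_url and api_version options above together determine the request prefix. As an illustrative sketch (not part of the SDK, and the exact joining logic is an assumption), the prefix presumably forms like this:

```python
def api_prefix(base_url: str = "https://api.memorymodel.dev",
               api_version: str = "v1") -> str:
    """Join the base URL and version prefix, tolerating a trailing slash."""
    return f"{base_url.rstrip('/')}/{api_version}"

print(api_prefix())  # https://api.memorymodel.dev/v1
```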

Methods

add(content, *, user_context=None, end_user_id=None)

response = client.add("Meeting moved to 3pm.", user_context="Calendar update")
print(response.job_id)

add_image(image_data, *, user_context=None, end_user_id=None)

import base64
with open("screenshot.png", "rb") as f:
    img_b64 = base64.b64encode(f.read()).decode()

response = client.add_image(img_b64, user_context="Error screenshot")

search(query, *, limit=5, strategy="centroid_aware", end_user_id=None)

results = client.search("What time is the meeting?", limit=3)

list(*, limit=50, memory_type=None, end_user_id=None)

memories = client.list(limit=10)

delete(memory_id, *, end_user_id=None)

client.delete("mem_abc123")

search_image(image_data, *, limit=5, end_user_id=None) — Visual Similarity Search

import base64

# Load and encode image
with open("query_image.jpg", "rb") as f:
    img_b64 = base64.b64encode(f.read()).decode()

# Find visually similar memories
results = client.search_image(img_b64, limit=5)
for memory in results:
    print(f"Found: {memory.id}")
    if hasattr(memory, 'source_doc'):
        print(f"  → From: {memory.source_doc}, Page: {memory.page_number}")

Note: Image search finds memories visually similar to the input image. Only memories ingested via add_image() will be matched.

upload_document(file_data, *, file_name="document.pdf", end_user_id=None) — Upload & Process PDF

Upload a PDF file directly. The file is sent as multipart/form-data, stored on MemoryModel's infrastructure, and automatically queued for processing.

# Read and upload a PDF
with open("contract.pdf", "rb") as f:
    result = client.upload_document(f.read(), file_name="contract.pdf")

print(result.storage_path)  # Where the file was stored
print(result.job_id)        # Processing job ID

This method:

  1. Uploads the PDF to MemoryModel's storage (max 20MB)
  2. Automatically queues server-side processing
  3. Extracts text from each page and creates linked memories
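Given the 20 MB limit in step 1, a caller can validate file size before making the request. This is an illustrative client-side helper, not part of the SDK (the exact limit semantics, e.g. binary vs. decimal megabytes, are an assumption):

```python
MAX_UPLOAD_BYTES = 20 * 1024 * 1024  # 20 MB limit, assumed binary megabytes

def check_upload_size(file_data: bytes) -> bytes:
    """Raise early if the PDF exceeds the documented upload limit."""
    if len(file_data) > MAX_UPLOAD_BYTES:
        raise ValueError(
            f"PDF is {len(file_data)} bytes; limit is {MAX_UPLOAD_BYTES} bytes (20 MB)"
        )
    return file_data
```

Failing fast client-side avoids spending a round trip (and retry budget) on a request the server will reject.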

add_document(storage_path, *, end_user_id=None) — Process Pre-Uploaded PDF

If the PDF is already in MemoryModel storage (e.g. uploaded via console), trigger processing by path:

client.add_document("projects/my-project/docs/contract.pdf")

Tip: Most users should use upload_document(). Use add_document() only if the file is already on our storage (e.g. uploaded via the Console UI).


Error Handling

from memory_model import MemoryClient, MemoryClientError

try:
    client.search("test")
except MemoryClientError as e:
    print(e.message)       # Human-readable error
    print(e.status)        # HTTP status (401, 429, 500...)
    print(e.code)          # Error code from API
    print(e.is_retryable)  # True for 5xx/429

Automatic Retry: The SDK retries on 5xx/429 with exponential backoff (1s → 2s → 4s).
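The 1s → 2s → 4s schedule corresponds to doubling delays per attempt. A minimal sketch of such a retry loop, for callers who want the same behavior around their own code (illustrative only; the SDK's internal implementation may differ, e.g. by adding jitter):

```python
import time

def with_retries(call, max_retries: int = 3, base_delay: float = 1.0, sleep=time.sleep):
    """Retry `call` on retryable errors, backing off 1s, 2s, 4s by default."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except Exception as exc:
            # Mirrors the SDK's is_retryable flag: True for 5xx/429
            if not getattr(exc, "is_retryable", False) or attempt == max_retries:
                raise
            sleep(base_delay * (2 ** attempt))  # exponential backoff
```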


Agent Integration

LangChain / CrewAI Example

from memory_model import MemoryClient
from langchain_openai import ChatOpenAI
from langchain.schema import SystemMessage, HumanMessage

memory = MemoryClient(api_key="sk_live_...", default_end_user_id="user_123")
llm = ChatOpenAI(model="gpt-4o")

def agent_respond(user_message: str) -> str:
    # 1. Store user message
    memory.add(user_message, user_context="User question")
    
    # 2. Retrieve context
    context = memory.search(user_message, limit=3)
    context_str = "\n".join([f"- {m.content}" for m in context])
    
    # 3. Generate response
    messages = [
        SystemMessage(content=f"Context:\n{context_str}"),
        HumanMessage(content=user_message)
    ]
    response = llm.invoke(messages)
    
    return response.content

Image-Based Search Example

import base64
from memory_model import MemoryClient

memory = MemoryClient(api_key="sk_live_...", default_end_user_id="user_123")

def find_similar_images(image_path: str):
    # 1. Load and encode query image
    with open(image_path, "rb") as f:
        img_b64 = base64.b64encode(f.read()).decode()
    
    # 2. Search for visually similar memories
    results = memory.search_image(img_b64, limit=5)
    
    # 3. Results may include source document info
    for mem in results:
        print(f"Found: {mem.id}")
        if hasattr(mem, 'source_doc'):
            print(f"  → From: {mem.source_doc}, Page: {mem.page_number}")
    
    return results



Download files


Source Distribution

memory_model-0.5.0.tar.gz (5.6 MB)

Built Distribution

memory_model-0.5.0-py3-none-any.whl (7.6 kB)

File details

Details for the file memory_model-0.5.0.tar.gz.

File metadata

  • File: memory_model-0.5.0.tar.gz
  • Size: 5.6 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.0

File hashes

Hashes for memory_model-0.5.0.tar.gz:

  • SHA256: c63eb4f50b1d5dde0081f8bd4e9e2d86d3413898442eaa782092acf62d2ab567
  • MD5: 9f76671388c50faf2be7275a114bf4e4
  • BLAKE2b-256: b1028f54de587ff9ce14a344f86ff4ebaee453e4f28c6f5e1cd7a16ec4fe9f1b

File details

Details for the file memory_model-0.5.0-py3-none-any.whl.

File metadata

  • File: memory_model-0.5.0-py3-none-any.whl
  • Size: 7.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.0

File hashes

Hashes for memory_model-0.5.0-py3-none-any.whl:

  • SHA256: 40ee67113dc85912c8a0be0df367266aa4930c768746e608a2cfea9f05078579
  • MD5: e25df90f5403298f1fd61188850e7383
  • BLAKE2b-256: b2649ad544f7195828f4c5c3c4a4277243db4cb4a4dfc02f7121f0e2f53ff357
