MemoryLayer.ai Python SDK
Python SDK for MemoryLayer.ai - memory infrastructure for AI agents.
Installation
pip install memorylayer-client
Quick Start
import asyncio

from memorylayer import MemoryLayerClient, MemoryType

async def main():
    async with MemoryLayerClient(
        base_url="http://localhost:61001",
        api_key="your-api-key",  # Optional for local development
        workspace_id="my-workspace"
    ) as client:
        # Store a memory
        memory = await client.remember(
            content="User prefers Python for backend development",
            type=MemoryType.SEMANTIC,
            importance=0.8,
            tags=["preferences", "programming"]
        )

        # Search memories
        results = await client.recall(
            query="what programming language does the user prefer?",
            limit=5
        )
        for memory in results.memories:
            print(f"{memory.content} (importance: {memory.importance})")

        # Synthesize memories
        reflection = await client.reflect(
            query="summarize user's technology preferences"
        )
        print(reflection.reflection)

asyncio.run(main())
Features
- Simple, Pythonic API - Async/await support with context managers
- Type-safe - Full type hints with Pydantic models
- Memory Operations - Remember, recall, reflect, forget, decay
- Relationship Graph - Link memories with typed relationships
- Session Management - Working memory with TTL and commit
- Batch Operations - Bulk create, update, delete
- Error Handling - Comprehensive exception hierarchy
Core Operations
Remember (Store Memory)
from memorylayer import MemorySubtype

memory = await client.remember(
    content="User prefers FastAPI over Flask",
    type=MemoryType.SEMANTIC,
    subtype=MemorySubtype.PREFERENCE,
    importance=0.8,
    tags=["preferences", "frameworks"],
    metadata={"source": "conversation"}
)
Recall (Search Memories)
from memorylayer import RecallMode, SearchTolerance

results = await client.recall(
    query="what frameworks does the user prefer?",
    types=[MemoryType.SEMANTIC],
    mode=RecallMode.RAG,  # Active mode: vector similarity + graph traversal
    limit=10,
    min_relevance=0.7,
    tolerance=SearchTolerance.MODERATE,
    include_associations=True  # Include related memories via graph traversal
)

# Note: LLM and Hybrid modes are deprecated. Use Context Environment's
# context_rlm() for LLM-powered analysis instead.
Reflect (Synthesize Memories)
reflection = await client.reflect(
    query="summarize everything about the user's development workflow",
    detail_level="standard",  # "brief", "standard", or "detailed"
    include_sources=True
)
print(reflection.reflection)
Associate (Link Memories)
from memorylayer import RelationshipType

association = await client.associate(
    source_id="mem_problem_123",
    target_id="mem_solution_456",
    relationship=RelationshipType.SOLVES,
    strength=0.9
)
Decay (Reduce Importance)
# Reduce memory importance over time
decayed = await client.decay("mem_123", decay_rate=0.1)
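The exact decay formula is applied server-side; assuming each call scales importance by (1 - decay_rate), which is one plausible reading rather than documented behavior, repeated decay compounds like this:

```python
# Toy model of decay semantics (an assumption; the server's exact
# formula may differ). Each decay call scales importance down.
def decay(importance: float, decay_rate: float = 0.1) -> float:
    return round(importance * (1 - decay_rate), 6)

importance = 0.8
for _ in range(3):
    importance = decay(importance, decay_rate=0.1)
# 0.8 -> 0.72 -> 0.648 -> 0.5832
```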
Trace (Memory Provenance)
# Get memory origin and association chain
trace = await client.trace_memory("mem_123")
print(trace["chain"])
Batch Operations
# Perform multiple operations in one request
results = await client.batch_memories([
    {"type": "create", "data": {"content": "Memory 1", "importance": 0.7}},
    {"type": "create", "data": {"content": "Memory 2", "importance": 0.8}},
    {"type": "delete", "data": {"memory_id": "mem_old", "hard": False}}
])
print(f"Successful: {results['successful']}, Failed: {results['failed']}")
Session Management
Sessions provide working memory with TTL that can be committed to long-term storage.
# Create session (auto-creates workspace if needed)
session = await client.create_session(
    ttl_seconds=3600,
    workspace_id="my-workspace"
)

# Store working memory
await client.set_context(
    session.id,
    "current_task",
    {"description": "Debugging auth", "file": "auth.py"}
)

# Retrieve working memory
context = await client.get_context(session.id, ["current_task"])

# Extend session TTL
await client.touch_session(session.id)

# Commit working memory to long-term storage
result = await client.commit_session(
    session.id,
    min_importance=0.5,
    deduplicate=True
)
print(f"Created {result['memories_created']} memories")

# Delete session
await client.delete_session(session.id)
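Conceptually, committing keeps only working-memory entries whose importance clears min_importance and, with deduplicate=True, drops repeats. A toy model of that filtering, with illustrative field names rather than the server's internals:

```python
# Illustrative model of commit_session filtering; the real server-side
# logic (scoring, deduplication strategy) may differ.
def commit(working_memory, min_importance=0.5, deduplicate=True):
    seen, kept = set(), []
    for item in working_memory:
        if item["importance"] < min_importance:
            continue  # below the commit threshold
        if deduplicate and item["content"] in seen:
            continue  # drop exact duplicates
        seen.add(item["content"])
        kept.append(item)
    return kept

entries = [
    {"content": "Debugging auth", "importance": 0.7},
    {"content": "Debugging auth", "importance": 0.9},
    {"content": "Lunch order", "importance": 0.2},
]
print(len(commit(entries)))  # 1
```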
Session Briefing
briefing = await client.get_briefing(lookback_minutes=1440)
print(briefing.recent_activity)
Context Environment
The Context Environment provides server-side Python execution for advanced memory analysis. Execute code, load memories into variables, and use LLM-powered reasoning.
Note: Context Environment operations require an active session. Call set_session() first.
Execute Python Code
# Set active session
client.set_session(session.id)
# Execute code in sandbox
await client.context_exec("import pandas as pd")
await client.context_exec("data = [1, 2, 3, 4, 5]")
# Execute and get result
result = await client.context_exec("sum(data)")
print(result["result"]) # 15
Load Memories into Sandbox
# Load memories as a variable
await client.context_load(
    var="preferences",
    query="user preferences",
    limit=20,
    min_relevance=0.7
)
# Inspect loaded data
state = await client.context_inspect("preferences")
print(state["type"], state["preview"])
Query with LLM
# Ask LLM to analyze sandbox variables
result = await client.context_query(
    prompt="Summarize the user's preferences and find patterns",
    variables=["preferences"]
)
print(result["response"])
Recursive Language Model (RLM)
# Run autonomous reasoning loop
result = await client.context_rlm(
    goal="Analyze coding preferences and identify contradictions",
    memory_query="coding preferences",
    max_iterations=10,
    detail_level="detailed"
)
print(result["result"])
Inject Values
# Inject data into sandbox
await client.context_inject(
    key="config",
    value={"debug": True, "max_retries": 3}
)
Status and Cleanup
# Check sandbox status
status = await client.context_status()
print(f"Variables: {status['variable_count']}")
# Checkpoint state (for enterprise persistence)
await client.context_checkpoint()
# Clean up sandbox
await client.context_cleanup()
Workspace Management
# Create workspace
workspace = await client.create_workspace("my-project")
# Get workspace
workspace = await client.get_workspace("ws_123")
# Update workspace
workspace = await client.update_workspace(
    "ws_123",
    name="New Name",
    settings={"key": "value"}
)
# Get workspace schema (relationship types, memory subtypes)
schema = await client.get_workspace_schema("ws_123")
print(schema["relationship_types"])
Memory Types
Cognitive Types
- Episodic - Specific events/interactions
- Semantic - Facts, concepts, relationships
- Procedural - How to do things
- Working - Current task context (session-scoped)
Domain Subtypes
- Solution - Working fixes to problems
- Problem - Issues encountered
- Code Pattern - Reusable patterns
- Fix - Bug fixes with context
- Error - Error patterns and resolutions
- Workflow - Process knowledge
- Preference - User/project preferences
- Decision - Architectural decisions
- Directive - User instructions/constraints
Relationship Types
Link memories with typed relationships. The SDK supports 60+ relationship types organized into 11 categories:
from memorylayer import RelationshipType
# Causal (4 types)
RelationshipType.CAUSES
RelationshipType.TRIGGERS
RelationshipType.LEADS_TO
RelationshipType.PREVENTS
# Solution (4 types)
RelationshipType.SOLVES
RelationshipType.ADDRESSES
RelationshipType.ALTERNATIVE_TO
RelationshipType.IMPROVES
# Learning (4 types)
RelationshipType.BUILDS_ON
RelationshipType.CONTRADICTS
RelationshipType.CONFIRMS
RelationshipType.SUPERSEDES
# Similarity (3 types)
RelationshipType.SIMILAR_TO
RelationshipType.VARIANT_OF
RelationshipType.RELATED_TO
# Workflow (4 types)
RelationshipType.FOLLOWS
RelationshipType.DEPENDS_ON
RelationshipType.ENABLES
RelationshipType.BLOCKS
# Quality (3 types)
RelationshipType.EFFECTIVE_FOR
RelationshipType.PREFERRED_OVER
RelationshipType.DEPRECATED_BY
# Context (4 types)
RelationshipType.OCCURS_IN
RelationshipType.APPLIES_TO
RelationshipType.WORKS_WITH
RelationshipType.REQUIRES
# ... and more categories (11 total)
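The typed edges above form a traversable graph. As a plain-Python sketch (tuples stand in for associations created via associate(); the IDs and edge set are made up for illustration), a breadth-first walk can answer questions like "what does solving this problem transitively involve?":

```python
from collections import deque

# Tuples stand in for stored associations; relationship names mirror
# the RelationshipType enum above. IDs are hypothetical.
edges = [
    ("mem_problem_123", "SOLVES", "mem_solution_456"),
    ("mem_solution_456", "SUPERSEDES", "mem_solution_111"),
    ("mem_solution_456", "DEPENDS_ON", "mem_config_789"),
]

def reachable(start, rels):
    """All memories reachable from `start` via the given relationship types."""
    graph = {}
    for src, rel, dst in edges:
        if rel in rels:
            graph.setdefault(src, []).append(dst)
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(reachable("mem_problem_123", {"SOLVES", "DEPENDS_ON"})))
# ['mem_config_789', 'mem_solution_456']
```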
Error Handling
from memorylayer import (
    MemoryLayerError,
    AuthenticationError,
    NotFoundError,
    ValidationError,
    RateLimitError,
    ServerError,
)

try:
    memory = await client.get_memory("mem_123")
except NotFoundError:
    print("Memory not found")
except AuthenticationError:
    print("Invalid API key")
except ValidationError as e:
    print(f"Validation error: {e}")
except RateLimitError:
    print("Rate limit exceeded")
except ServerError as e:
    print(f"Server error: {e.status_code}")
except MemoryLayerError as e:
    print(f"MemoryLayer error: {e}")
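RateLimitError is a natural candidate for retry with exponential backoff. A minimal sketch, using a local stand-in exception so it runs without a server; in real code you would import RateLimitError from memorylayer and wrap calls like client.remember():

```python
import asyncio
import random

# Stand-in for memorylayer.RateLimitError so the sketch is self-contained.
class RateLimitError(Exception):
    pass

async def with_retry(op, attempts=4, base_delay=0.5):
    """Retry an async operation with exponential backoff and jitter."""
    for attempt in range(attempts):
        try:
            return await op()
        except RateLimitError:
            if attempt == attempts - 1:
                raise  # out of retries; surface the error
            await asyncio.sleep(base_delay * (2 ** attempt) * random.uniform(1.0, 1.5))
```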
Configuration
client = MemoryLayerClient(
    base_url="http://localhost:61001",  # Default
    api_key="your-api-key",             # Optional for local dev
    workspace_id="my-workspace",        # Default workspace
    session_id="sess_123",              # Optional active session
    timeout=30.0                        # Request timeout in seconds
)
Development
Install Development Dependencies
pip install -e ".[dev]"
Run Tests
pytest
Type Checking
mypy src/memorylayer
Linting
ruff check src/memorylayer
ruff format src/memorylayer
License
Apache 2.0 License -- see LICENSE for details.
File details
Details for the file memorylayer_client-0.0.5.tar.gz.
File metadata
- Download URL: memorylayer_client-0.0.5.tar.gz
- Size: 24.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | a83bec7c8e5665cb9cb912547c109c97a4f23716dd00cdd137166ed4e8059db4 |
| MD5 | b6b867105146a672aa1c6e955f4af3fa |
| BLAKE2b-256 | cf9f99bac62a216bff3a44213a09a479faf75ceb0db6bc2bce3eb4a29597db6c |
Provenance
The following attestation bundles were made for memorylayer_client-0.0.5.tar.gz:
Publisher: release.yml on scitrera/memorylayer
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: memorylayer_client-0.0.5.tar.gz
- Subject digest: a83bec7c8e5665cb9cb912547c109c97a4f23716dd00cdd137166ed4e8059db4
- Sigstore transparency entry: 953455432
- Permalink: scitrera/memorylayer@a491d8554474fcac58dbc4f28be277c053d29f52
- Branch / Tag: refs/tags/v0.0.5
- Owner: https://github.com/scitrera
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@a491d8554474fcac58dbc4f28be277c053d29f52
- Trigger Event: push
File details
Details for the file memorylayer_client-0.0.5-py3-none-any.whl.
File metadata
- Download URL: memorylayer_client-0.0.5-py3-none-any.whl
- Size: 4.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 3f1280d4fc0937eea2f55efdb9f450eecf63516f04586c71142c2bee36e98f57 |
| MD5 | def8e57eb456cd9179cf9d88299118e3 |
| BLAKE2b-256 | 2c436ad8b0fffe78cdd943871929dbda912c0057b49139a211644e8fdaefd466 |
Provenance
The following attestation bundles were made for memorylayer_client-0.0.5-py3-none-any.whl:
Publisher: release.yml on scitrera/memorylayer
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: memorylayer_client-0.0.5-py3-none-any.whl
- Subject digest: 3f1280d4fc0937eea2f55efdb9f450eecf63516f04586c71142c2bee36e98f57
- Sigstore transparency entry: 953455435
- Permalink: scitrera/memorylayer@a491d8554474fcac58dbc4f28be277c053d29f52
- Branch / Tag: refs/tags/v0.0.5
- Owner: https://github.com/scitrera
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@a491d8554474fcac58dbc4f28be277c053d29f52
- Trigger Event: push