MemoryLayer.ai Server

API-first memory infrastructure for LLM-powered agents (open source core).
MemoryLayer provides cognitive memory capabilities for AI agents, including episodic, semantic, procedural, and working memory with vector-based retrieval, graph-based associations, and server-side computation sandboxes.
Features
- Cognitive Memory Architecture — Episodic, semantic, procedural, and working memory types
- Vector Search — SQLite with sqlite-vec for efficient similarity search
- Knowledge Graph — 60+ relationship types organized into 11 categories for memory associations
- Context Environment — Server-side Python sandboxes for memory analysis and computation
- Session Management — Working memory with TTL and commit to long-term storage
- REST API — Full-featured HTTP API for all memory operations
- Multiple Embedding Providers — OpenAI, Google GenAI, sentence-transformers (local), and mock (testing)
- Health Endpoints — `/health` and `/health/ready` for monitoring and readiness checks
Installation
```bash
# Basic installation
pip install memorylayer-server

# With OpenAI embeddings
pip install memorylayer-server[openai]

# With Google GenAI embeddings
pip install memorylayer-server[google]

# With local embeddings (sentence-transformers)
pip install memorylayer-server[local]

# All embedding providers
pip install memorylayer-server[all]
```
- Package name: `memorylayer-server` (PyPI)
- Import name: `memorylayer_server`
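A quick way to verify the install is to import the package under its module name:

```bash
# Should exit cleanly if the package is importable
python -c "import memorylayer_server"
```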
Quick Start
Start the HTTP Server
```bash
# Start on default port (61001)
memorylayer serve

# Custom port
memorylayer serve --port 8080

# Bind to all interfaces
memorylayer serve --host 0.0.0.0

# Debug mode
memorylayer serve --verbose
```
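Once the server is up, the health endpoint (see Health Checks below) gives a quick smoke test:

```bash
# Should return a success response from the running server
curl http://localhost:61001/health
```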
Docker
The official Docker image comes with all optional dependencies pre-installed and defaults to local embeddings (no API key required):
```bash
docker run -d \
  --name memorylayer \
  -p 61001:61001 \
  -v memorylayer-data:/data \
  scitrera/memorylayer-server
```

With OpenAI embeddings:

```bash
docker run -d \
  --name memorylayer \
  -p 61001:61001 \
  -v memorylayer-data:/data \
  -e MEMORYLAYER_EMBEDDING_PROVIDER=openai \
  -e MEMORYLAYER_EMBEDDING_OPENAI_API_KEY=sk-... \
  scitrera/memorylayer-server
```
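For longer-lived deployments, the same options translate directly into Docker Compose. A minimal sketch using only the settings shown above (the service and volume names are arbitrary):

```yaml
services:
  memorylayer:
    image: scitrera/memorylayer-server
    ports:
      - "61001:61001"
    volumes:
      # Persist memories across container restarts
      - memorylayer-data:/data
    environment:
      # Optional: switch from the default local embeddings to OpenAI
      - MEMORYLAYER_EMBEDDING_PROVIDER=openai
      - MEMORYLAYER_EMBEDDING_OPENAI_API_KEY=${OPENAI_API_KEY}

volumes:
  memorylayer-data:
```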
API Usage
The server exposes a REST API. Use any HTTP client, or install the Python SDK (`pip install memorylayer-client`) for a typed client:

```python
import asyncio

from memorylayer import MemoryLayerClient


async def main():
    async with MemoryLayerClient(base_url="http://localhost:61001") as client:
        # Store a memory
        memory = await client.remember(
            content="User prefers Python for backend development",
            type="semantic",
            importance=0.8,
            tags=["preferences", "programming"],
        )

        # Store a second memory so the association below has a target
        other_memory = await client.remember(
            content="User is building a backend service",
            type="semantic",
        )

        # Recall memories
        results = await client.recall(
            query="What programming languages does the user like?",
            limit=5,
        )

        # Create associations
        await client.associate(
            source_id=memory.id,
            target_id=other_memory.id,
            relationship="related_to",
            strength=0.9,
        )


asyncio.run(main())
```
Configuration
Environment Variables
| Variable | Default | Description |
|---|---|---|
| `MEMORYLAYER_SERVER_HOST` | `127.0.0.1` | Server bind address |
| `MEMORYLAYER_SERVER_PORT` | `61001` | Server port |
| `MEMORYLAYER_DATA_DIR` | `~/.config/memorylayer-server` | Data directory |
| `MEMORYLAYER_SQLITE_STORAGE_PATH` | `memorylayer.db` | SQLite database path (relative to data dir) |
| `MEMORYLAYER_EMBEDDING_PROVIDER` | `local` | Embedding provider (`openai`, `google`, `local`, `mock`) |
| `MEMORYLAYER_EMBEDDING_OPENAI_API_KEY` | — | OpenAI API key |
| `MEMORYLAYER_EMBEDDING_GOOGLE_API_KEY` | — | Google API key |
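For example, to run a network-exposed instance with a dedicated data directory (values are illustrative):

```bash
export MEMORYLAYER_SERVER_HOST=0.0.0.0
export MEMORYLAYER_DATA_DIR=/var/lib/memorylayer
memorylayer serve
```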
Embedding Providers
Local (sentence-transformers) — Default provider, no API key required:

```bash
pip install memorylayer-server[local]
export MEMORYLAYER_EMBEDDING_PROVIDER=local
memorylayer serve
```

OpenAI:

```bash
pip install memorylayer-server[openai]
export MEMORYLAYER_EMBEDDING_PROVIDER=openai
export MEMORYLAYER_EMBEDDING_OPENAI_API_KEY=sk-...
memorylayer serve
```

Google GenAI:

```bash
pip install memorylayer-server[google]
export MEMORYLAYER_EMBEDDING_PROVIDER=google
export MEMORYLAYER_EMBEDDING_GOOGLE_API_KEY=...
memorylayer serve
```

Mock (testing only):

```bash
export MEMORYLAYER_EMBEDDING_PROVIDER=mock
memorylayer serve
```
LLM Provider (Optional)
Some features (reflection, smart extraction, context environment queries) require an LLM provider configured via profiles:
```bash
# OpenAI
export MEMORYLAYER_LLM_PROFILE_DEFAULT_PROVIDER=openai
export MEMORYLAYER_LLM_PROFILE_DEFAULT_API_KEY=sk-...

# Anthropic Claude
export MEMORYLAYER_LLM_PROFILE_DEFAULT_PROVIDER=anthropic
export MEMORYLAYER_LLM_PROFILE_DEFAULT_API_KEY=sk-ant-...

# Google Gemini
export MEMORYLAYER_LLM_PROFILE_DEFAULT_PROVIDER=google
export MEMORYLAYER_LLM_PROFILE_DEFAULT_API_KEY=...
```
Profile configuration variables (replace DEFAULT with any profile name):
| Variable | Description |
|---|---|
| `MEMORYLAYER_LLM_PROFILE_<NAME>_PROVIDER` | Provider (`openai`, `anthropic`, `google`) |
| `MEMORYLAYER_LLM_PROFILE_<NAME>_API_KEY` | API key |
| `MEMORYLAYER_LLM_PROFILE_<NAME>_MODEL` | Model name override |
| `MEMORYLAYER_LLM_PROFILE_<NAME>_BASE_URL` | Custom API base URL |
| `MEMORYLAYER_LLM_PROFILE_<NAME>_MAX_TOKENS` | Max response tokens |
| `MEMORYLAYER_LLM_PROFILE_<NAME>_TEMPERATURE` | Sampling temperature |
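As a sketch, a second profile named `FAST` with a model override could look like this; the profile name and model value are illustrative, not defaults shipped by the server:

```bash
export MEMORYLAYER_LLM_PROFILE_FAST_PROVIDER=openai
export MEMORYLAYER_LLM_PROFILE_FAST_API_KEY=sk-...
export MEMORYLAYER_LLM_PROFILE_FAST_MODEL=gpt-4o-mini  # illustrative model name
export MEMORYLAYER_LLM_PROFILE_FAST_MAX_TOKENS=1024
```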
Without an LLM provider, core memory operations (remember, recall, forget, associate) work normally, but synthesis features will be unavailable.
Context Environment
The Context Environment provides server-side Python sandboxes for memory analysis and computation. See Context Environment documentation for details.
Configuration:
| Variable | Default | Description |
|---|---|---|
| `MEMORYLAYER_CONTEXT_EXECUTOR` | `smolagents` | Executor backend (`smolagents` or `restricted`) |
| `MEMORYLAYER_CONTEXT_MAX_EXEC_SECONDS` | `30` | Timeout per code execution |
| `MEMORYLAYER_CONTEXT_MAX_OUTPUT_CHARS` | `50000` | Max captured stdout characters |
| `MEMORYLAYER_CONTEXT_QUERY_MAX_TOKENS` | `4096` | Max tokens for server-side LLM queries |
| `MEMORYLAYER_CONTEXT_MAX_MEMORY_BYTES` | `268435456` | Memory limit per sandbox (256 MB) |
| `MEMORYLAYER_CONTEXT_RLM_MAX_ITERATIONS` | `10` | Max iterations for RLM loops |
| `MEMORYLAYER_CONTEXT_RLM_MAX_EXEC_SECONDS` | `120` | Total timeout for RLM loops |
| `MEMORYLAYER_CONTEXT_MAX_OPERATIONS` | `1000000` | Max operations per sandbox execution |
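For example, to run sandboxes with tighter limits than the defaults (values are illustrative):

```bash
export MEMORYLAYER_CONTEXT_MAX_EXEC_SECONDS=10
export MEMORYLAYER_CONTEXT_MAX_MEMORY_BYTES=134217728  # 128 MB
export MEMORYLAYER_CONTEXT_MAX_OUTPUT_CHARS=20000
memorylayer serve
```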
Storage
The default storage backend is SQLite with sqlite-vec for vector operations. The database file defaults to ~/.config/memorylayer-server/memorylayer.db and contains all memories, embeddings, associations, and session data.
Override the data directory:

```bash
export MEMORYLAYER_DATA_DIR=/var/lib/memorylayer
```

Override the database path:

```bash
export MEMORYLAYER_SQLITE_STORAGE_PATH=/var/lib/memorylayer/data.db
```
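Because a relative `MEMORYLAYER_SQLITE_STORAGE_PATH` is resolved against the data directory, the two settings compose:

```bash
# Database ends up at /var/lib/memorylayer/agent-memory.db
export MEMORYLAYER_DATA_DIR=/var/lib/memorylayer
export MEMORYLAYER_SQLITE_STORAGE_PATH=agent-memory.db
```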
Recall Modes
The active recall mode is RAG (vector similarity + graph traversal). LLM and Hybrid modes are deprecated.
MCP Integration
The Model Context Protocol (MCP) server is a separate TypeScript package (@scitrera/memorylayer-mcp-server), not part of this Python server CLI.
To use MemoryLayer with Claude Code or Claude Desktop:

1. Start the HTTP server:

   ```bash
   memorylayer serve
   ```

2. Install and configure the MCP server:

   ```bash
   npm install -g @scitrera/memorylayer-mcp-server
   ```
See the MCP Server documentation for setup instructions.
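As an illustration only, MCP clients are typically pointed at a server through a JSON entry like the one below. The exact command, arguments, and environment variables accepted by @scitrera/memorylayer-mcp-server are defined in its own documentation, so treat every value here, including the `MEMORYLAYER_BASE_URL` variable name, as an assumption:

```json
{
  "mcpServers": {
    "memorylayer": {
      "command": "npx",
      "args": ["-y", "@scitrera/memorylayer-mcp-server"],
      "env": {
        "MEMORYLAYER_BASE_URL": "http://localhost:61001"
      }
    }
  }
}
```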
Health Checks
- `GET /health` — Basic health check (returns immediately)
- `GET /health/ready` — Readiness check (verifies storage connectivity)

The Docker image includes a built-in health check at `/health` (every 30s, 10s startup grace period).
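Outside Docker, the two endpoints map naturally onto liveness and readiness probes. A Kubernetes sketch, assuming the container listens on the default port:

```yaml
# Liveness: /health returns immediately
livenessProbe:
  httpGet:
    path: /health
    port: 61001
  periodSeconds: 30
# Readiness: /health/ready verifies storage connectivity
readinessProbe:
  httpGet:
    path: /health/ready
    port: 61001
  initialDelaySeconds: 10
```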
Documentation
- Website: https://memorylayer.ai
- Docs: https://docs.memorylayer.ai
- GitHub: https://github.com/scitrera/memorylayer
License
Apache 2.0 License -- see LICENSE for details.
Download files
File details
Details for the file memorylayer_server-0.0.4.tar.gz.
File metadata
- Download URL: memorylayer_server-0.0.4.tar.gz
- Size: 162.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `ec56f100d6dacc0f43f1c2a044f6efc6489b19155570f7d7f7e3fcd8384ac86a` |
| MD5 | `72185f29338361510724d1de5003d8d2` |
| BLAKE2b-256 | `23802098815d7bab719d2a2f26ea0806713df9a49eeb0c03bc5425f14e84c055` |
Provenance
The following attestation bundles were made for memorylayer_server-0.0.4.tar.gz:
Publisher: release.yml on scitrera/memorylayer

Statement:

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: memorylayer_server-0.0.4.tar.gz
- Subject digest: ec56f100d6dacc0f43f1c2a044f6efc6489b19155570f7d7f7e3fcd8384ac86a
- Sigstore transparency entry: 953344137

Source repository:

- Permalink: scitrera/memorylayer@3b3d37a2a74f83848316b5ace5c132c156eef7d1
- Branch / Tag: refs/tags/v0.0.4
- Owner: https://github.com/scitrera
- Access: public

Publication:

- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@3b3d37a2a74f83848316b5ace5c132c156eef7d1
- Trigger Event: push
File details
Details for the file memorylayer_server-0.0.4-py3-none-any.whl.
File metadata
- Download URL: memorylayer_server-0.0.4-py3-none-any.whl
- Size: 241.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `994e4380d066abaced7d7729ed2374a15fe915f47d2b94aabefd5a9d2318cf98` |
| MD5 | `ab2ed4c2ee62f64a3dcefb7b73425b76` |
| BLAKE2b-256 | `79ab0d1eb9ef923c39ae59e9ebd0ab0515987812bd87c6dc65af4642ab55a8ca` |
Provenance
The following attestation bundles were made for memorylayer_server-0.0.4-py3-none-any.whl:
Publisher: release.yml on scitrera/memorylayer

Statement:

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: memorylayer_server-0.0.4-py3-none-any.whl
- Subject digest: 994e4380d066abaced7d7729ed2374a15fe915f47d2b94aabefd5a9d2318cf98
- Sigstore transparency entry: 953344138

Source repository:

- Permalink: scitrera/memorylayer@3b3d37a2a74f83848316b5ace5c132c156eef7d1
- Branch / Tag: refs/tags/v0.0.4
- Owner: https://github.com/scitrera
- Access: public

Publication:

- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@3b3d37a2a74f83848316b5ace5c132c156eef7d1
- Trigger Event: push