
MemoryLayer.ai - API-first memory infrastructure for LLM-powered agents (open source core)

Project description

MemoryLayer.ai Server

API-first memory infrastructure for LLM-powered agents.

MemoryLayer provides cognitive memory capabilities for AI agents, including episodic, semantic, procedural, and working memory with vector-based retrieval, graph-based associations, and server-side computation sandboxes.

Features

  • Cognitive Memory Architecture — Episodic, semantic, procedural, and working memory types
  • Vector Search — SQLite with sqlite-vec for efficient similarity search
  • Knowledge Graph — 60+ relationship types organized into 11 categories for memory associations
  • Context Environment — Server-side Python sandboxes for memory analysis and computation
  • Session Management — Working memory with TTL and commit to long-term storage
  • REST API — Full-featured HTTP API for all memory operations
  • Multiple Embedding Providers — OpenAI, Google GenAI, sentence-transformers (local), and mock (testing)
  • Health Endpoints — /health and /health/ready for monitoring and readiness checks

Installation

# Basic installation
pip install memorylayer-server

# With OpenAI embeddings
pip install memorylayer-server[openai]

# With Google GenAI embeddings
pip install memorylayer-server[google]

# With local embeddings (sentence-transformers)
pip install memorylayer-server[local]

# All embedding providers
pip install memorylayer-server[all]

Package name: memorylayer-server (PyPI)
Import name: memorylayer_server

Quick Start

Start the HTTP Server

# Start on default port (61001)
memorylayer serve

# Custom port
memorylayer serve --port 8080

# Bind to all interfaces
memorylayer serve --host 0.0.0.0

# Debug mode
memorylayer serve --verbose
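For launching the server from Python (e.g. in a test harness), the flags above can be assembled programmatically. A minimal sketch — `serve_command` is a hypothetical helper, not part of the package:

```python
# Illustrative helper that builds the `memorylayer serve` command line
# from the flags documented above (defaults per the docs: port 61001,
# bind address 127.0.0.1).
def serve_command(port=61001, host="127.0.0.1", verbose=False):
    cmd = ["memorylayer", "serve"]
    if port != 61001:
        cmd += ["--port", str(port)]
    if host != "127.0.0.1":
        cmd += ["--host", host]
    if verbose:
        cmd.append("--verbose")
    return cmd

# e.g. subprocess.Popen(serve_command(port=8080)) would start the server
# on port 8080 (requires memorylayer-server to be installed on PATH).
```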

Docker

The official Docker image comes with all optional dependencies pre-installed and defaults to local embeddings (no API key required):

docker run -d \
  --name memorylayer \
  -p 61001:61001 \
  -v memorylayer-data:/data \
  scitrera/memorylayer-server

With OpenAI embeddings:

docker run -d \
  --name memorylayer \
  -p 61001:61001 \
  -v memorylayer-data:/data \
  -e MEMORYLAYER_EMBEDDING_PROVIDER=openai \
  -e MEMORYLAYER_EMBEDDING_OPENAI_API_KEY=sk-... \
  scitrera/memorylayer-server

API Usage

The server exposes a REST API. Use any HTTP client, or install the Python SDK (pip install memorylayer-client) for a typed client:

from memorylayer import MemoryLayerClient

async with MemoryLayerClient(base_url="http://localhost:61001") as client:
    # Store a memory
    memory = await client.remember(
        content="User prefers Python for backend development",
        type="semantic",
        importance=0.8,
        tags=["preferences", "programming"]
    )

    # Recall memories
    results = await client.recall(
        query="What programming languages does the user like?",
        limit=5
    )

    # Create associations (other_memory: another memory stored earlier)
    await client.associate(
        source_id=memory.id,
        target_id=other_memory.id,
        relationship="related_to",
        strength=0.9
    )

Configuration

Environment Variables

Variable | Default | Description
MEMORYLAYER_SERVER_HOST | 127.0.0.1 | Server bind address
MEMORYLAYER_SERVER_PORT | 61001 | Server port
MEMORYLAYER_DATA_DIR | ~/.config/memorylayer-server | Data directory
MEMORYLAYER_SQLITE_STORAGE_PATH | memorylayer.db | SQLite database path (relative to the data directory)
MEMORYLAYER_EMBEDDING_PROVIDER | local | Embedding provider (openai, google, local, or mock)
MEMORYLAYER_EMBEDDING_OPENAI_API_KEY | (none) | OpenAI API key
MEMORYLAYER_EMBEDDING_GOOGLE_API_KEY | (none) | Google API key

Embedding Providers

Local (sentence-transformers) — Default provider, no API key required:

pip install memorylayer-server[local]
export MEMORYLAYER_EMBEDDING_PROVIDER=local
memorylayer serve

OpenAI:

pip install memorylayer-server[openai]
export MEMORYLAYER_EMBEDDING_PROVIDER=openai
export MEMORYLAYER_EMBEDDING_OPENAI_API_KEY=sk-...
memorylayer serve

Google GenAI:

pip install memorylayer-server[google]
export MEMORYLAYER_EMBEDDING_PROVIDER=google
export MEMORYLAYER_EMBEDDING_GOOGLE_API_KEY=...
memorylayer serve

Mock (testing only):

export MEMORYLAYER_EMBEDDING_PROVIDER=mock
memorylayer serve
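A small preflight check can catch a missing API key before the server starts. The mapping below is derived from the provider sections above; the helper itself is illustrative, not part of the package:

```python
import os

# Which API-key variable each documented embedding provider requires
# (local and mock need none).
REQUIRED_EMBEDDING_KEYS = {
    "openai": ["MEMORYLAYER_EMBEDDING_OPENAI_API_KEY"],
    "google": ["MEMORYLAYER_EMBEDDING_GOOGLE_API_KEY"],
    "local": [],
    "mock": [],
}

def missing_embedding_env(provider):
    """Return the required variables that are not set for this provider."""
    return [var for var in REQUIRED_EMBEDDING_KEYS.get(provider, [])
            if not os.environ.get(var)]
```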

LLM Provider (Optional)

Some features (reflection, smart extraction, context environment queries) require an LLM provider configured via profiles:

# OpenAI
export MEMORYLAYER_LLM_PROFILE_DEFAULT_PROVIDER=openai
export MEMORYLAYER_LLM_PROFILE_DEFAULT_API_KEY=sk-...

# Anthropic Claude
export MEMORYLAYER_LLM_PROFILE_DEFAULT_PROVIDER=anthropic
export MEMORYLAYER_LLM_PROFILE_DEFAULT_API_KEY=sk-ant-...

# Google Gemini
export MEMORYLAYER_LLM_PROFILE_DEFAULT_PROVIDER=google
export MEMORYLAYER_LLM_PROFILE_DEFAULT_API_KEY=...

Profile configuration variables (replace DEFAULT with any profile name):

Variable | Description
MEMORYLAYER_LLM_PROFILE_<NAME>_PROVIDER | Provider (openai, anthropic, or google)
MEMORYLAYER_LLM_PROFILE_<NAME>_API_KEY | API key
MEMORYLAYER_LLM_PROFILE_<NAME>_MODEL | Model name override
MEMORYLAYER_LLM_PROFILE_<NAME>_BASE_URL | Custom API base URL
MEMORYLAYER_LLM_PROFILE_<NAME>_MAX_TOKENS | Maximum response tokens
MEMORYLAYER_LLM_PROFILE_<NAME>_TEMPERATURE | Sampling temperature

Without an LLM provider, core memory operations (remember, recall, forget, associate) work normally, but synthesis features will be unavailable.
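Because profile variables follow a regular naming scheme, they can be generated rather than typed by hand. A sketch under that assumption — `llm_profile_env` is a hypothetical helper, not part of the package:

```python
def llm_profile_env(name, provider, api_key, **extra):
    """Build MEMORYLAYER_LLM_PROFILE_<NAME>_* variables per the table above.

    Extra keyword arguments (model=..., base_url=..., max_tokens=...,
    temperature=...) map to the matching variable suffixes.
    """
    prefix = f"MEMORYLAYER_LLM_PROFILE_{name.upper()}_"
    env = {prefix + "PROVIDER": provider, prefix + "API_KEY": api_key}
    for key, value in extra.items():
        env[prefix + key.upper()] = str(value)
    return env

# Example: merge into the environment before calling `memorylayer serve`:
# os.environ.update(llm_profile_env("default", "openai", "sk-..."))
```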

Context Environment

The Context Environment provides server-side Python sandboxes for memory analysis and computation. See Context Environment documentation for details.

Configuration:

Variable | Default | Description
MEMORYLAYER_CONTEXT_EXECUTOR | smolagents | Executor backend (smolagents or restricted)
MEMORYLAYER_CONTEXT_MAX_EXEC_SECONDS | 30 | Timeout per code execution
MEMORYLAYER_CONTEXT_MAX_OUTPUT_CHARS | 50000 | Maximum captured stdout characters
MEMORYLAYER_CONTEXT_QUERY_MAX_TOKENS | 4096 | Maximum tokens for server-side LLM queries
MEMORYLAYER_CONTEXT_MAX_MEMORY_BYTES | 268435456 | Memory limit per sandbox (256 MB)
MEMORYLAYER_CONTEXT_RLM_MAX_ITERATIONS | 10 | Maximum iterations for RLM loops
MEMORYLAYER_CONTEXT_RLM_MAX_EXEC_SECONDS | 120 | Total timeout for RLM loops
MEMORYLAYER_CONTEXT_MAX_OPERATIONS | 1000000 | Maximum operations per sandbox execution

Storage

The default storage backend is SQLite with sqlite-vec for vector operations. The database file defaults to ~/.config/memorylayer-server/memorylayer.db and contains all memories, embeddings, associations, and session data.

Override the data directory:

export MEMORYLAYER_DATA_DIR=/var/lib/memorylayer

Override the database path:

export MEMORYLAYER_SQLITE_STORAGE_PATH=/var/lib/memorylayer/data.db
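Putting the two variables together, the effective database location can be computed like this — an illustrative sketch assuming only what the configuration table states (a relative MEMORYLAYER_SQLITE_STORAGE_PATH resolves against the data directory); the helper is not part of the package:

```python
import os

def effective_db_path(env=os.environ):
    """Resolve where the SQLite file lands, per the documented defaults."""
    data_dir = env.get("MEMORYLAYER_DATA_DIR",
                       os.path.expanduser("~/.config/memorylayer-server"))
    db = env.get("MEMORYLAYER_SQLITE_STORAGE_PATH", "memorylayer.db")
    # Absolute paths are used as-is; relative paths join the data directory.
    return db if os.path.isabs(db) else os.path.join(data_dir, db)
```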

Recall Modes

The active recall mode is RAG (vector similarity + graph traversal). LLM and Hybrid modes are deprecated.

MCP Integration

The Model Context Protocol (MCP) server is a separate TypeScript package (@scitrera/memorylayer-mcp-server), not part of this Python server CLI.

To use MemoryLayer with Claude Code or Claude Desktop:

  1. Start the HTTP server: memorylayer serve
  2. Install and configure the MCP server: npm install -g @scitrera/memorylayer-mcp-server

See the MCP Server documentation for setup instructions.

Health Checks

  • GET /health — Basic health check (returns immediately)
  • GET /health/ready — Readiness check (verifies storage connectivity)

The Docker image includes a built-in health check at /health (every 30s, 10s startup grace period).
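Outside Docker, the same endpoints can be polled from Python using only the standard library; a minimal sketch (the helper name is illustrative, and a server must be running for it to return True):

```python
import urllib.request
from urllib.error import URLError

def check_health(base_url="http://localhost:61001", ready=False, timeout=5.0):
    """Return True if /health (or /health/ready) answers with HTTP 200."""
    path = "/health/ready" if ready else "/health"
    try:
        with urllib.request.urlopen(base_url + path, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        # Connection refused, DNS failure, or timeout: treat as unhealthy.
        return False

# check_health() -> liveness; check_health(ready=True) -> storage readiness.
```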

License

Apache 2.0 License -- see LICENSE for details.

Project details


Download files


Source Distribution

memorylayer_server-0.0.5.tar.gz (162.5 kB)

Uploaded Source

Built Distribution


memorylayer_server-0.0.5-py3-none-any.whl (241.7 kB)

Uploaded Python 3

File details

Details for the file memorylayer_server-0.0.5.tar.gz.

File metadata

  • Download URL: memorylayer_server-0.0.5.tar.gz
  • Size: 162.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for memorylayer_server-0.0.5.tar.gz
Algorithm Hash digest
SHA256 0a7bd1e1475115403b64b68898eb7dfe7434d445cf18509215774a3350747eac
MD5 ebc077361bb9c7e2088f72886edd346c
BLAKE2b-256 59a4782d1568de640886f00bf64ccd9c571ed5f28fb173da81773ad9c9e9e394


Provenance

The following attestation bundles were made for memorylayer_server-0.0.5.tar.gz:

Publisher: release.yml on scitrera/memorylayer

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file memorylayer_server-0.0.5-py3-none-any.whl.

File metadata

File hashes

Hashes for memorylayer_server-0.0.5-py3-none-any.whl
Algorithm Hash digest
SHA256 6b4aeb9d13583cd9635d6b71d6a660f2e0ab5188f0b8fe1bb0457bea4983de55
MD5 d9b96bef1b69b641acdc9446475364c6
BLAKE2b-256 b0495dca4cbfa8080aa39095c2cce9b96a2f9c68a5d8c4a091d5beae12145c72


Provenance

The following attestation bundles were made for memorylayer_server-0.0.5-py3-none-any.whl:

Publisher: release.yml on scitrera/memorylayer

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
