
High-performance AI memory engine with Rust core


mem7

LLM-powered long-term memory engine — Rust core with multi-language bindings.

Deeply inspired by Mem0, mem7 reimplements the core memory pipeline in Rust and adds an Ebbinghaus forgetting curve — stale memories naturally decay over time while frequently recalled facts grow stronger, just like human memory.

mem7 extracts factual statements from conversations, deduplicates them against existing memories, and stores the results in vector + graph databases with full audit history.
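The write path above can be sketched in a few lines of Python. This is an illustrative toy with hypothetical helper names, not the real mem7 API; in the actual engine the extraction and dedup steps are LLM calls and the stores are the vector/graph/SQLite backends listed below.

```python
# Toy sketch of mem7's add() pipeline; all names here are illustrative.

def extract_facts(messages):
    # In mem7 an LLM extracts factual statements; here we pass user text through.
    return [m["content"] for m in messages if m["role"] == "user"]

def dedup_event(fact, store):
    # In mem7 an LLM compares the fact against vector-search hits and
    # decides whether to add, update, delete, or do nothing.
    return "NONE" if fact in store else "ADD"

def add(messages, store, history):
    for fact in extract_facts(messages):
        event = dedup_event(fact, store)
        if event == "ADD":
            store.append(fact)            # vector + graph write in the real engine
        history.append((event, fact))     # audit trail (SQLite in the real engine)

store, history = [], []
add([{"role": "user", "content": "I love tennis."}], store, history)
add([{"role": "user", "content": "I love tennis."}], store, history)
# the repeated fact is deduplicated, while both attempts land in the audit history
```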

Install

pip install mem7          # Python
npm install @mem7ai/mem7  # Node.js / TypeScript
cargo add mem7            # Rust

Architecture

Python / TypeScript / Rust API
    │  PyO3 (sync + async) / napi-rs / native
    ▼
Rust Core (tokio async runtime)
    ├── mem7-llm        — OpenAI-compatible LLM client
    ├── mem7-embedding  — Embedding client (OpenAI-compatible / FastEmbed)
    ├── mem7-vector     — Vector index (FlatIndex / Upstash)
    ├── mem7-graph      — Graph store (FlatGraph / Kuzu / Neo4j)
    ├── mem7-history    — SQLite audit trail
    ├── mem7-dedup      — LLM-driven memory deduplication
    ├── mem7-reranker   — Search reranking (Cohere / LLM-based)
    ├── mem7-telemetry  — OpenTelemetry tracing (OTLP export)
    └── mem7-store      — Pipeline orchestrator (MemoryEngine)

Quick Start (Python — Sync)

from mem7 import Memory
from mem7.config import MemoryConfig, LlmConfig, EmbeddingConfig

config = MemoryConfig(
    llm=LlmConfig(
        base_url="http://localhost:11434/v1",
        api_key="ollama",
        model="qwen2.5:7b",
    ),
    embedding=EmbeddingConfig(
        base_url="http://localhost:11434/v1",
        api_key="ollama",
        model="mxbai-embed-large",
        dims=1024,
    ),
)

m = Memory(config=config)
m.add("I love playing tennis and my coach is Sarah.", user_id="alice")
results = m.search("What sports does Alice play?", user_id="alice")

Quick Start (Python — Async)

import asyncio
from mem7 import AsyncMemory
from mem7.config import MemoryConfig, LlmConfig, EmbeddingConfig

async def main():
    config = MemoryConfig(
        llm=LlmConfig(
            base_url="http://localhost:11434/v1",
            api_key="ollama",
            model="qwen2.5:7b",
        ),
        embedding=EmbeddingConfig(
            base_url="http://localhost:11434/v1",
            api_key="ollama",
            model="mxbai-embed-large",
            dims=1024,
        ),
    )

    m = await AsyncMemory.create(config=config)
    await m.add("I love playing tennis and my coach is Sarah.", user_id="alice")
    results = await m.search("What sports does Alice play?", user_id="alice")

asyncio.run(main())

Quick Start (TypeScript)

import { MemoryEngine } from "@mem7ai/mem7";

const engine = await MemoryEngine.create(JSON.stringify({
  llm: { base_url: "http://localhost:11434/v1", api_key: "ollama", model: "qwen2.5:7b" },
  embedding: { base_url: "http://localhost:11434/v1", api_key: "ollama", model: "mxbai-embed-large", dims: 1024 },
}));

await engine.add([{ role: "user", content: "I love playing tennis and my coach is Sarah." }], "alice");
const results = await engine.search("What sports does Alice play?", "alice");

Supported Providers

mem7 uses a single OpenAI-compatible client for both LLM and Embedding, which covers any service that exposes the OpenAI API format. This includes most major providers out of the box.
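In practice that means switching providers is just a config change: the same `LlmConfig` shape points at any OpenAI-compatible endpoint. The URLs and model names below are illustrative examples, not defaults.

```python
from mem7.config import LlmConfig

# Same config type, different OpenAI-compatible endpoints (values illustrative):
ollama = LlmConfig(base_url="http://localhost:11434/v1", api_key="ollama", model="qwen2.5:7b")
groq   = LlmConfig(base_url="https://api.groq.com/openai/v1", api_key="gsk_...", model="llama-3.1-8b-instant")
vllm   = LlmConfig(base_url="http://localhost:8000/v1", api_key="EMPTY", model="Qwen/Qwen2.5-7B-Instruct")
```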

LLMs

Provider Status Notes
OpenAI :white_check_mark: Native support
Ollama :white_check_mark: Via OpenAI-compatible API
vLLM :white_check_mark: Via OpenAI-compatible API
Groq :white_check_mark: Via OpenAI-compatible API
Together :white_check_mark: Via OpenAI-compatible API
DeepSeek :white_check_mark: Via OpenAI-compatible API
xAI (Grok) :white_check_mark: Via OpenAI-compatible API
LM Studio :white_check_mark: Via OpenAI-compatible API
Azure OpenAI :white_check_mark: Via OpenAI-compatible API
Anthropic :x: Requires native SDK
Gemini :x: Requires native SDK
Vertex AI :x: Requires native SDK
AWS Bedrock :x: Requires native SDK
LiteLLM :x: Python proxy
Sarvam :x: Requires native SDK
LangChain :x: Python framework

Embeddings

Provider Status Notes
OpenAI :white_check_mark: Native support
Ollama :white_check_mark: Via OpenAI-compatible API
Together :white_check_mark: Via OpenAI-compatible API
LM Studio :white_check_mark: Via OpenAI-compatible API
Azure OpenAI :white_check_mark: Via OpenAI-compatible API
FastEmbed :white_check_mark: Local ONNX inference (feature flag fastembed)
Hugging Face :x: Requires native SDK
Gemini :x: Requires native SDK
Vertex AI :x: Requires native SDK
AWS Bedrock :x: Requires native SDK
LangChain :x: Python framework

Vector Stores

Provider Status Notes
In-memory (FlatIndex) :white_check_mark: Built-in, good for dev
Upstash Vector :white_check_mark: REST API, serverless
Qdrant :x:
Chroma :x:
pgvector :x:
Milvus :x:
Pinecone :x:
Redis :x:
Weaviate :x:
Elasticsearch :x:
OpenSearch :x:
FAISS :x:
MongoDB :x:
Supabase :x:
Azure AI Search :x:
Vertex AI Vector Search :x:
Databricks :x:
Cassandra :x:
S3 Vectors :x:
Baidu :x:
Neptune :x:
Valkey :x:
LangChain :x:

Rerankers

Provider Status Notes
Cohere :white_check_mark: Cohere v2 rerank API
LLM-based :white_check_mark: Any OpenAI-compatible LLM
Jina AI :x: Planned
Cross-encoder :x: Planned

Graph Stores

Provider Status Notes
In-memory (FlatGraph) :white_check_mark: Built-in, good for dev/testing
Kuzu (embedded) :white_check_mark: Cypher-based, no server needed (feature flag kuzu)
Neo4j :white_check_mark: Production-grade, Bolt protocol
Memgraph :x: Planned
Amazon Neptune :x: Planned

Language Bindings

Language Status
Python (sync + async) :white_check_mark: PyPI: pip install mem7
TypeScript / Node.js :white_check_mark: npm: npm install @mem7ai/mem7
Rust :white_check_mark: crates.io: cargo add mem7
Go :x: Planned

Vector Store Backends

Built-in FlatIndex (default) — in-memory brute-force, good for development:

from mem7.config import VectorConfig

VectorConfig(provider="flat", dims=1024)

Upstash Vector — managed cloud vector database:

VectorConfig(
    provider="upstash",
    collection_name="my-namespace",
    dims=1024,
    upstash_url="https://your-index.upstash.io",
    upstash_token="your-token",
)

Local Embedding (FastEmbed)

mem7 supports fully local embedding via FastEmbed (ONNX Runtime). No API calls needed — models are downloaded and run locally.

Requires the fastembed feature flag:

# Cargo.toml
mem7 = { version = "0.2", features = ["fastembed"] }
from mem7 import Memory
from mem7.config import MemoryConfig, LlmConfig, EmbeddingConfig

config = MemoryConfig(
    llm=LlmConfig(base_url="http://localhost:11434/v1", api_key="ollama", model="qwen2.5:7b"),
    embedding=EmbeddingConfig(
        provider="fastembed",
        model="AllMiniLML6V2",  # or "BGEBaseENV15", "NomicEmbedTextV15", etc.
        dims=384,
    ),
)

m = Memory(config=config)  # model downloaded on first use

Supported models include AllMiniLML6V2, BGEBaseENV15, BGESmallENV15, NomicEmbedTextV1, MxbaiEmbedLargeV1, GTEBaseENV15, and their quantized variants.

Graph Memory (Dual-Path Recall)

When graph is configured, mem7 runs dual-path recall: vector search and graph search execute concurrently via tokio::join!, returning both factual memories and entity relations.

On add(), the engine extracts entities and relations from conversations using LLM (JSON mode) and stores them in the graph alongside the vector memories.
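The concurrency pattern is easy to picture in Python terms: the sketch below is an asyncio analogue of the Rust core's `tokio::join!`, with stand-in search functions (not mem7 APIs).

```python
import asyncio

# Illustrative asyncio analogue of the Rust core's dual-path recall.
# vector_search / graph_search are stand-ins, not mem7 functions.

async def vector_search(query):
    await asyncio.sleep(0.01)  # pretend: embed query + ANN lookup
    return ["Alice loves playing tennis"]

async def graph_search(query):
    await asyncio.sleep(0.01)  # pretend: entity match + relation traversal
    return [("USER", "loves_playing", "tennis")]

async def dual_path_search(query):
    # Both paths run concurrently, so total latency is roughly the
    # slower of the two rather than their sum.
    memories, relations = await asyncio.gather(
        vector_search(query), graph_search(query)
    )
    return {"memories": memories, "relations": relations}

results = asyncio.run(dual_path_search("What sports does Alice play?"))
```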

FlatGraph (in-memory, for development):

from mem7 import Memory
from mem7.config import MemoryConfig, LlmConfig, EmbeddingConfig, GraphConfig

config = MemoryConfig(
    llm=LlmConfig(base_url="http://localhost:11434/v1", api_key="ollama", model="qwen2.5:7b"),
    embedding=EmbeddingConfig(base_url="http://localhost:11434/v1", api_key="ollama", model="mxbai-embed-large", dims=1024),
    graph=GraphConfig(provider="flat"),
)

m = Memory(config=config)
m.add("I love playing tennis and my coach is Sarah.", user_id="alice")

results = m.search("What sports does Alice play?", user_id="alice")
# results["memories"]   -> vector search results
# results["relations"]  -> graph relations (e.g. USER -[loves_playing]-> tennis)

Neo4j (production):

GraphConfig(
    provider="neo4j",
    neo4j_url="bolt://localhost:7687",
    neo4j_username="neo4j",
    neo4j_password="password",
)

Kuzu (embedded, requires kuzu feature flag):

GraphConfig(provider="kuzu", kuzu_db_path="./my_graph.kuzu")

The graph LLM can be configured separately (e.g. use a cheaper model for extraction):

GraphConfig(
    provider="flat",
    llm=LlmConfig(base_url="http://localhost:11434/v1", api_key="ollama", model="qwen2.5:3b"),
)

Memory Decay (Forgetting Curve)

mem7 implements an Ebbinghaus-inspired forgetting curve that deprioritizes stale memories over time while automatically strengthening memories that are frequently recalled — just like human memory.

When enabled, every memory carries two extra metadata fields: last_accessed_at (the last time it was written or retrieved) and access_count (how many times it has been retrieved). These are used to compute a retention score that modulates the raw similarity score during search and dedup:

$$S = S_0 \cdot \bigl(1 + \alpha \cdot \ln(1 + n)\bigr)$$

$$R(t) = \exp\!\Bigl(-\Bigl(\frac{t - \tau}{S}\Bigr)^{\gamma}\Bigr)$$

$$\widetilde{R}(t) = \rho + (1 - \rho) \cdot R(t)$$

$$\text{score}_{\text{final}} = \text{sim}_{\text{raw}} \times \widetilde{R}(t)$$

where $S_0$ = base half-life, $\alpha$ = rehearsal factor, $n$ = access count, $\tau$ = last accessed time, $\gamma$ = decay shape, $\rho$ = min retention floor.

  • Decay over time: memories you haven't touched in weeks get deprioritized, but never disappear (the floor parameter ensures a minimum retention of 10% by default).
  • Rehearsal strengthening: each time a memory is successfully retrieved via search(), its access_count is incremented and last_accessed_at is reset asynchronously — making it harder to forget next time.
  • Cue-dependent retrieval: a highly relevant query naturally "wakes up" old memories because raw_similarity is high, even if the retention score is low. No separate sigmoid gate is needed — the multiplicative structure handles it.
  • Write-path aware: decay is also applied during the dedup phase of add(), so stale memories appear less "close" to new facts and are more likely to be updated or replaced.
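The formulas above translate directly into code. The sketch below mirrors the defaults from the tuning table further down; it is an illustration of the math, not mem7's internal implementation.

```python
import math
import time

def retention(now, last_accessed_at, access_count,
              base_half_life_secs=604800.0, rehearsal_factor=0.5,
              decay_shape=0.8, min_retention=0.1):
    # S = S0 * (1 + alpha * ln(1 + n)): stability grows with each recall
    stability = base_half_life_secs * (1 + rehearsal_factor * math.log(1 + access_count))
    # R(t) = exp(-((t - tau) / S)^gamma): stretched-exponential decay
    age = max(0.0, now - last_accessed_at)
    r = math.exp(-((age / stability) ** decay_shape))
    # floor so no memory fully vanishes
    return min_retention + (1 - min_retention) * r

def final_score(raw_similarity, now, last_accessed_at, access_count, **kw):
    return raw_similarity * retention(now, last_accessed_at, access_count, **kw)

now = time.time()
fresh = final_score(0.9, now, now, access_count=0)                 # no decay yet
stale = final_score(0.9, now, now - 30 * 86400, access_count=0)    # 30 days old
# fresh stays at the raw similarity; stale is deprioritized but floored above zero
```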

Enabling Decay

Decay is off by default. Enable it via config:

Python:

from mem7.config import MemoryConfig, DecayConfig

config = MemoryConfig(
    # ... llm, embedding, etc.
    decay=DecayConfig(enabled=True),
)

TypeScript:

const engine = await MemoryEngine.create(JSON.stringify({
  // ... llm, embedding, etc.
  decay: { enabled: true },
}));

Rust:

use mem7_config::{MemoryEngineConfig, DecayConfig};

let config = MemoryEngineConfig {
    decay: Some(DecayConfig { enabled: true, ..Default::default() }),
    ..Default::default()
};

Tuning Parameters

Parameter Default Description
base_half_life_secs 604800.0 Base stability in seconds (7 days) before any rehearsal bonus
decay_shape 0.8 Stretched-exponential shape (0 < gamma <= 1); lower = slower initial decay
min_retention 0.1 Floor so no memory fully vanishes
rehearsal_factor 0.5 How much each retrieval increases stability

Backward Compatibility

  • Old memories without last_accessed_at or access_count gracefully degrade: age falls back to updated_at then created_at, and access count defaults to 0.
  • No migration needed — new fields are written on the next add() or update() call.
  • When decay is disabled (the default), scoring behavior is identical to previous versions.

Context-Aware Scoring (Session-Aware Recall)

Pure embedding similarity can conflate semantic closeness with contextual relevance — for example, a design preference like "always investigate root cause first" may score high when searching "fix Chrome CDP bug" because both relate to debugging. With context-aware scoring, mem7 automatically classifies queries and memories to boost what's relevant and demote what isn't.

How It Works

  1. Write path — each extracted fact is tagged with a memory_type (factual, preference, procedural, episodic) during LLM fact extraction.
  2. Read path — each search query is classified into a task_type (troubleshooting, design, factual_lookup, planning, general) via a lightweight LLM call that runs in parallel with embedding, adding zero sequential latency.
  3. A context coefficient is looked up from a (memory_type, task_type) weight matrix and multiplied into the score:

$$\text{score}_{\text{final}} = \text{similarity} \times \text{decay} \times \text{context coeff}$$

Default Weight Matrix

(memory_type ↓ / task_type →) troubleshooting design factual_lookup planning general
factual 1.0 0.5 1.0 0.7 1.0
preference 0.3 1.0 0.3 0.8 0.8
procedural 0.8 0.5 0.5 1.0 0.7
episodic 0.5 0.5 0.5 0.5 0.7
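The lookup itself is a small dictionary indexed by `(memory_type, task_type)`. The matrix below copies the default weights table; the fallback behavior for unknown types is an assumption based on the documented "treated as factual" default.

```python
# Sketch of the (memory_type, task_type) coefficient lookup; illustrative only.
DEFAULT_WEIGHTS = {
    "factual":    {"troubleshooting": 1.0, "design": 0.5, "factual_lookup": 1.0, "planning": 0.7, "general": 1.0},
    "preference": {"troubleshooting": 0.3, "design": 1.0, "factual_lookup": 0.3, "planning": 0.8, "general": 0.8},
    "procedural": {"troubleshooting": 0.8, "design": 0.5, "factual_lookup": 0.5, "planning": 1.0, "general": 0.7},
    "episodic":   {"troubleshooting": 0.5, "design": 0.5, "factual_lookup": 0.5, "planning": 0.5, "general": 0.7},
}

def context_coeff(memory_type, task_type, weights=DEFAULT_WEIGHTS):
    # Unknown memory types fall back to "factual" (the documented safe default);
    # unknown task types fall back to "general".
    row = weights.get(memory_type, weights["factual"])
    return row.get(task_type, row["general"])

# A design preference is demoted for a troubleshooting query, as in the
# "fix Chrome CDP bug" example above:
score = 0.82 * context_coeff("preference", "troubleshooting")  # 0.82 * 0.3
```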

Enabling Context-Aware Scoring

Context scoring is off by default. Enable it via config:

Python:

from mem7.config import MemoryConfig, ContextConfig

config = MemoryConfig(
    # ... llm, embedding, etc.
    context=ContextConfig(enabled=True),
)

TypeScript:

const engine = await MemoryEngine.create(JSON.stringify({
  // ... llm, embedding, etc.
  context: { enabled: true },
}));

Rust:

use mem7_config::{MemoryEngineConfig, ContextConfig};

let config = MemoryEngineConfig {
    context: Some(ContextConfig { enabled: true, ..Default::default() }),
    ..Default::default()
};

You can also provide custom weights to override the defaults:

ContextConfig(
    enabled=True,
    weights={
        "preference": {"troubleshooting": 0.1, "design": 1.0},
    },
)

Overriding Task Type

If the caller already knows the task context, it can pass task_type directly to skip the LLM classification call:

results = m.search("fix Chrome CDP timeout", user_id="alice", task_type="troubleshooting")

Backward Compatibility

  • Context scoring defaults to disabled — zero impact on existing users.
  • Old memories without memory_type are treated as "factual" (safe default).
  • When context is disabled, the scoring pipeline is identical to previous versions.

OpenClaw Plugin

mem7 ships an official OpenClaw memory plugin that replaces the built-in memory backend with LLM-powered fact extraction, graph relations, dedup, and the forgetting curve — all driven by mem7's Rust core.

Install

openclaw plugins install @mem7ai/openclaw-mem7

Activate

In ~/.openclaw/openclaw.json:

{
  "plugins": {
    "slots": { "memory": "openclaw-mem7" },
    "entries": {
      "openclaw-mem7": {
        "enabled": true,
        "config": {
          "llm": { "base_url": "http://localhost:11434/v1", "api_key": "ollama", "model": "qwen2.5:7b" },
          "embedding": { "base_url": "http://localhost:11434/v1", "api_key": "ollama", "model": "mxbai-embed-large", "dims": 1024 },
          "graph": { "provider": "flat" },
          "decay": { "enabled": true }
        }
      }
    }
  }
}

What it does

  • Auto-recall (before_prompt_build): before each agent turn, the plugin searches mem7 for relevant memories and injects them into the system prompt.
  • Auto-capture (agent_end): after each turn, the user + assistant messages are sent through mem7's fact extraction pipeline, automatically storing new facts and deduplicating against existing ones.
  • Tools: the plugin registers memory_search, memory_get, and memory_store tools that the agent can call explicitly.
  • Forgetting curve: decay is enabled by default so stale facts naturally fade, while frequently recalled memories stay strong.

See packages/openclaw-mem7/ for full documentation.

Observability (OpenTelemetry)

mem7 integrates with OpenTelemetry via tracing-opentelemetry. When enabled, every add(), search(), get(), update(), delete() call emits a trace span that is exported via OTLP/gRPC to any compatible collector (Jaeger, Grafana Tempo, Datadog, etc.).

Python:

from mem7 import Memory, init_telemetry, shutdown_telemetry

init_telemetry(otlp_endpoint="http://localhost:4317", service_name="my-app")

m = Memory(config=config)
m.add("I love playing tennis.", user_id="alice")
# spans are exported automatically

shutdown_telemetry()  # flush before exit

TypeScript:

import { MemoryEngine, initTelemetry, shutdownTelemetry } from "@mem7ai/mem7";

initTelemetry(JSON.stringify({ otlp_endpoint: "http://localhost:4317", service_name: "my-app" }));

const engine = await MemoryEngine.create(configJson);
await engine.add([{ role: "user", content: "I love tennis." }], "alice");

shutdownTelemetry();

Rust (requires otel feature):

// Cargo.toml: mem7 = { version = "0.2", features = ["otel"] }
use mem7::{TelemetryConfig, telemetry};

telemetry::init(&TelemetryConfig::default())?;
// ... use MemoryEngine as usual ...
telemetry::shutdown();

Examples

See the examples/ directory.

Development

Prerequisites

  • Rust 1.85+ (stable)
  • Python 3.10+
  • Node.js 22+
  • maturin

Build

python -m venv .venv && source .venv/bin/activate
pip install maturin pydantic

# Development build (debug, fast iteration)
maturin develop

# Release build
maturin develop --release

Test

# Rust tests
cargo test --workspace

# Clippy
cargo clippy --workspace --all-targets -- -D warnings

License

Apache-2.0

Download files

Source Distribution

  • mem7-0.3.2.tar.gz (112.1 kB): Source

Built Distributions

  • mem7-0.3.2-cp310-abi3-win_amd64.whl (5.3 MB): CPython 3.10+, Windows x86-64
  • mem7-0.3.2-cp310-abi3-manylinux_2_24_aarch64.whl (6.1 MB): CPython 3.10+, manylinux (glibc 2.24+) ARM64
  • mem7-0.3.2-cp310-abi3-macosx_11_0_arm64.whl (5.6 MB): CPython 3.10+, macOS 11.0+ ARM64
  • mem7-0.3.2-cp310-abi3-macosx_10_12_x86_64.whl (5.8 MB): CPython 3.10+, macOS 10.12+ x86-64
  • mem7-0.3.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (6.3 MB): CPython 3.8, manylinux (glibc 2.17+) x86-64

File details

Details for the file mem7-0.3.2.tar.gz.

File metadata

  • Download URL: mem7-0.3.2.tar.gz
  • Size: 112.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for mem7-0.3.2.tar.gz
Algorithm Hash digest
SHA256 2d84c122ba2898878c14a21272f388b97e044748de5d3b15dd318752cf11aa67
MD5 9dd43c351150eef3717cfedce0479062
BLAKE2b-256 2857cb02d86ebc0cbb2a4594bc2bb0a17cac520a90290e50fca1dc5102ea57db


Provenance

The following attestation bundles were made for mem7-0.3.2.tar.gz:

Publisher: release.yml on mem7ai/mem7

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

