
memoryhub

Centralized, governed memory for AI agents.

MemoryHub provides a persistent memory layer for AI agents running on OpenShift AI, with scope-based access control, multi-tenant isolation, and an immutable audit trail. It works with any agent framework — LlamaStack, LangChain, Claude Code, Cursor, and more.

Status: Alpha (v0.4.0). Core operations are stable; curation and relationship APIs may evolve.

Installation

pip install memoryhub

Requires Python 3.10+.

Quick start

import asyncio
from memoryhub import MemoryHubClient

async def main():
    client = MemoryHubClient(
        url="https://mcp-server.apps.example.com/mcp/",
        auth_url="https://auth-server.apps.example.com",
        client_id="my-agent",
        client_secret="my-secret",
    )

    async with client:
        # Search memories
        results = await client.search("deployment patterns", max_results=5)
        for memory in results.results:
            print(f"[{memory.scope}] {memory.content[:80]}")

        # Write a memory
        written = await client.write(
            "FastAPI is the preferred web framework",
            scope="project",
            weight=0.85,
        )
        print(f"Created: {written.memory.id}")

        # Read it back
        memory = await client.read(written.memory.id)
        print(memory.content)

        # Update it
        updated = await client.update(written.memory.id, weight=0.9)
        print(f"Version: {updated.version}")

        # Campaign-scoped search (requires project enrollment)
        campaign_results = await client.search(
            "shared patterns",
            project_id="my-project",
            domains=["React", "Spring Boot"],
        )

asyncio.run(main())

Environment variables

Instead of passing credentials directly, use MemoryHubClient.from_env():

export MEMORYHUB_URL="https://mcp-server.apps.example.com/mcp/"
export MEMORYHUB_AUTH_URL="https://auth-server.apps.example.com"
export MEMORYHUB_CLIENT_ID="my-agent"
export MEMORYHUB_CLIENT_SECRET="my-secret"

client = MemoryHubClient.from_env()

Project configuration

MemoryHubClient.from_env() (and construction without an explicit project_config argument) auto-discovers .memoryhub.yaml by walking up from the current working directory. If found, the file's retrieval_defaults are applied to outbound calls whenever the caller omits the corresponding argument, and memory_loading.live_subscription controls whether the client subscribes to push updates on connect.
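Based on the keys described above, the project file might look like the following. This is a sketch: only retrieval_defaults.max_results and memory_loading.live_subscription are documented here, and the exact schema may differ.

```yaml
# Sketch of .memoryhub.yaml; values are illustrative.
retrieval_defaults:
  max_results: 20
memory_loading:
  live_subscription: true
```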

In practice that means a caller can write a plain search and inherit the project's retrieval policy:

client = MemoryHubClient.from_env()
async with client:
    # .memoryhub.yaml sets retrieval_defaults.max_results: 20
    # so this call transparently uses max_results=20
    results = await client.search("deployment patterns")

To opt out of auto-discovery, pass auto_discover_config=False:

client = MemoryHubClient.from_env(auto_discover_config=False)

Or pass an explicit ProjectConfig to the constructor to use a fixed policy regardless of cwd. The recommended way to generate .memoryhub.yaml is the memoryhub-cli wizard (memoryhub config init); see the repo root README for the split between project config (.memoryhub.yaml, committed) and connection config (~/.config/memoryhub/config.json, per-developer).

Sync usage

For non-async contexts, use the _sync variants:

from memoryhub import MemoryHubClient

client = MemoryHubClient(
    url="https://mcp-server.apps.example.com/mcp/",
    auth_url="https://auth-server.apps.example.com",
    client_id="my-agent",
    client_secret="my-secret",
)

results = client.search_sync("deployment patterns")
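Only search_sync is shown above. Assuming the other core operations expose matching _sync variants (an inference from "the _sync variants", not confirmed by this page), writes would look like:

```python
# Assumption: write_sync/read_sync mirror the async write/read signatures.
written = client.write_sync(
    "FastAPI is the preferred web framework",
    scope="project",
    weight=0.85,
)
memory = client.read_sync(written.memory.id)
print(memory.content)
```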

API reference

Core operations

search(query, *, scope, max_results, project_id, domains, ...)
    Semantic similarity search
read(memory_id, *, include_versions, project_id)
    Read a memory by ID
write(content, *, scope, weight, project_id, domains, ...)
    Create a new memory
update(memory_id, *, content, weight, project_id, domains, ...)
    Update an existing memory

Lifecycle

get_history(memory_id, *, max_versions, project_id)
    Version history
report_contradiction(memory_id, observed_behavior, *, project_id)
    Flag stale memories
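Continuing from the quick-start client, a sketch of the lifecycle calls (the memory ID, version limit, and observed-behavior text are all illustrative):

```python
# Inside the quick-start's `async with client:` block.
# Fetch prior versions of a memory (limit is illustrative).
history = await client.get_history(written.memory.id, max_versions=10)

# Flag the memory as contradicted by what the agent actually observed.
await client.report_contradiction(
    written.memory.id,
    "New services use Django, not FastAPI",
)
```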

Relationships and curation

get_similar(memory_id, *, threshold, project_id)
    Find similar memories
get_relationships(node_id, *, relationship_type, direction, project_id)
    Get memory relationships
create_relationship(source_id, target_id, relationship_type, *, project_id)
    Create a relationship
suggest_merge(memory_a_id, memory_b_id, reasoning, *, project_id)
    Suggest merging duplicates
set_curation_rule(name, *, tier, action, config)
    Configure curation rules
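A sketch of how these compose into a deduplication pass. The threshold value, the relationship_type string "duplicates", and the attribute names on the result object are assumptions, not documented values:

```python
# Inside an `async with client:` block; IDs are illustrative.
similar = await client.get_similar(written.memory.id, threshold=0.8)

for match in similar.results:  # attribute names are assumed
    # "duplicates" is an assumed relationship_type value.
    await client.create_relationship(
        written.memory.id, match.id, "duplicates",
    )
    await client.suggest_merge(
        written.memory.id, match.id,
        "Both memories state the same framework preference",
    )
```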

All methods accepting project_id use it for campaign enrollment verification. When a target memory has scope="campaign", the server resolves campaign membership through project_id. The domains parameter on search() boosts domain-matching results (non-matching results still appear); on write()/update() it tags the memory with crosscutting knowledge domains.

Authentication

The SDK uses OAuth 2.1 client_credentials grant under the hood. Token management is fully automatic — the SDK fetches, caches, and refreshes JWT access tokens transparently. You never need to handle tokens directly.

Further documentation

The SDK is one surface of the memory-hub monorepo; see the repository for deeper context.
