gigaevo-memory

Python client for the GigaEvo Memory Module: persistent storage for CARL artifacts such as chains, steps, agents, and memory cards.

mmar-carl is a required dependency and is installed automatically with the client.

Installation

pip install gigaevo-memory

For vector or hybrid search with the default local embedding provider, also install sentence-transformers:

pip install gigaevo-memory sentence-transformers

If you are running against this repository's local Docker stack, bring the API up and apply migrations first:

make up
make migrate

Quick Start

Save and load a chain

from mmar_carl import ContextSearchConfig, LLMStepDescription, ReasoningChain
from gigaevo_memory import MemoryClient

chain = ReasoningChain(
    steps=[
        LLMStepDescription(
            number=1,
            title="Analyze text",
            aim="Summarize the input text",
            reasoning_questions="What is the main topic? What are the key points?",
            stage_action="Read the input and produce a concise summary",
            example_reasoning="The text is about X, with key points A, B, and C.",
        )
    ],
    max_workers=1,
    metadata={"name": "Simple Analysis Chain"},
    search_config=ContextSearchConfig(strategy="substring"),
)

with MemoryClient(base_url="http://localhost:8000") as client:
    ref = client.save_chain(
        chain=chain,
        name="simple_analysis_chain",
        tags=["demo", "analysis"],
    )

    loaded_chain = client.get_chain(ref.entity_id, channel="latest")
    loaded_chain_dict = client.get_chain_dict(ref.entity_id, channel="latest")
    versions = client.list_versions(ref.entity_id, entity_type="chain")

    print(ref.entity_id)
    print(len(loaded_chain.steps))
    print(loaded_chain_dict["metadata"]["name"])
    print([v.version_number for v in versions])

Save and search memory cards

from gigaevo_memory import MemoryClient, SearchType

memory_card = {
    "description": "Batch Processing Pattern",
    "explanation": "Use when work can be split into independent chunks.",
    "keywords": ["batch", "parallel", "etl"],
    "category": "pattern_optimization",
}

with MemoryClient(base_url="http://localhost:8000") as client:
    ref = client.save_memory_card(
        memory_card=memory_card,
        name="Batch Processing Pattern",
        tags=memory_card["keywords"],
        when_to_use=memory_card["explanation"],
    )

    card = client.get_memory_card(ref.entity_id)
    results = client.search(
        query="batch processing",
        search_type=SearchType.BM25,
        entity_type="memory_card",
        top_k=5,
    )

    print(card.description)
    print([item.description for item in results])

Search APIs

The client exposes a single unified entry point for memory-card retrieval:

  • search(query, search_type=..., top_k=..., entity_type="memory_card"): unified BM25, vector, or hybrid search. Returns list[MemoryCardSpec].

For example:

from gigaevo_memory import MemoryClient, SearchType

with MemoryClient() as client:
    bm25_hits = client.search(
        query="batch processing",
        search_type=SearchType.BM25,
        entity_type="memory_card",
    )

    hybrid_hits = client.search(
        query="performance optimization",
        search_type=SearchType.HYBRID,
        entity_type="memory_card",
        hybrid_weights=(0.3, 0.7),  # relative weighting of the two rankers' scores
    )
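
Conceptually, hybrid search blends the scores of the two rankers according to hybrid_weights. As a rough illustration only (the actual server-side fusion formula, and which weight maps to which ranker, are assumptions, not documented behavior):

```python
def blend_scores(bm25_score: float, vector_score: float,
                 weights: tuple[float, float] = (0.3, 0.7)) -> float:
    """Weighted linear fusion of two normalized ranker scores.

    Illustrative sketch only; the real blending happens server-side.
    """
    w_bm25, w_vector = weights
    return w_bm25 * bm25_score + w_vector * vector_score
```

Under this reading, a higher second weight favors semantic (vector) matches over exact keyword (BM25) matches.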

Batch search is also available for memory cards:

from gigaevo_memory import MemoryClient, SearchType

with MemoryClient() as client:
    results = client.batch_search(
        queries=["batch processing", "etl pipeline", "parallel execution"],
        search_type=SearchType.BM25,
        top_k=3,
    )

Vector and hybrid search requirements

Vector-capable search has two runtime requirements:

  • The client must be able to generate embeddings. By default this means installing sentence-transformers, or passing a custom embedding_provider.
  • The server must have vector search enabled. If the API is started with vector search disabled, vector and hybrid requests return 503.
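
If you would rather not install sentence-transformers, you can supply your own embedding provider. The sketch below uses a toy deterministic embedder to show the expected shape (a callable mapping a list of texts to a list of equal-length vectors); the constructor parameter name embedding_provider is taken from the description above, and the wiring is an assumption, not a verified signature:

```python
from math import sqrt

def simple_embedding_provider(texts):
    """Toy deterministic embedder: hashed character counts, L2-normalized.

    A stand-in for a real model such as sentence-transformers; returns one
    fixed-dimension vector per input text.
    """
    dim = 8
    vectors = []
    for text in texts:
        vec = [0.0] * dim
        for ch in text.lower():
            vec[ord(ch) % dim] += 1.0
        norm = sqrt(sum(v * v for v in vec)) or 1.0  # avoid division by zero
        vectors.append([v / norm for v in vec])
    return vectors

# Hypothetical wiring (parameter name is an assumption):
# client = MemoryClient(embedding_provider=simple_embedding_provider)
```

A real provider would call an embedding model in the same way: take the batch of texts, return one vector per text.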

Version management

The client includes helpers for versioned entities and channel management:

from gigaevo_memory import MemoryClient

with MemoryClient() as client:
    entity_id = "your-chain-id"
    version_id = "your-version-id"
    from_version = "older-version-id"
    to_version = "newer-version-id"

    versions = client.list_versions(entity_id, entity_type="chain")
    detail = client.get_version(entity_id, version_id, entity_type="chain")
    diff = client.diff_versions(entity_id, from_version, to_version, entity_type="chain")
    client.pin_channel(entity_id, channel="stable", version_id=version_id, entity_type="chain")
    client.promote(entity_id, from_channel="latest", to_channel="stable", entity_type="chain")

Watching for updates

Use watch_chain() to subscribe to SSE updates for a chain:

from gigaevo_memory import MemoryClient

with MemoryClient() as client:
    entity_id = "your-chain-id"

    sub = client.watch_chain(
        entity_id,
        callback=lambda new_chain: print(f"Chain updated: {len(new_chain.steps)} steps"),
    )

    # ... later ...
    sub.stop()

Cache policies

from gigaevo_memory import CachePolicy, MemoryClient

# TTL-based cache (default: 300 seconds)
client = MemoryClient(cache_policy=CachePolicy.TTL, cache_ttl=300)

# Conditional GET using ETag when a cached entry exists
client = MemoryClient(cache_policy=CachePolicy.FRESHNESS_CHECK)

CachePolicy.SSE_PUSH exists as a cache policy enum, but normal entity reads do not automatically attach an SSE listener. For push-style updates today, use watch_chain() explicitly.

Development

make client-install
make client-test
make client-lint
make client-build

Examples

Runnable example scripts live in examples/:

  • upload_chain.py
  • download_chain.py
  • update_chain.py
  • run_chain.py
  • upload_memory_card.py
  • memory_cards_demo.py

