
Python SDK for Capsule

Capsule SDK

The Capsule SDK is the recommended client surface for registering workloads, triggering builds, allocating runners, and interacting with running Capsule sandboxes from Python.

Requirements

  • Python >= 3.10
  • access to a running Capsule control plane
  • an API token if your deployment requires authenticated requests

Installation

pip install capsule-sdk

For local development:

cd sdk/python
python3 -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"

Configuration

The SDK can be configured directly in code or through environment variables.

| Parameter | Env var | Default |
| --- | --- | --- |
| base_url | CAPSULE_BASE_URL | http://localhost:8080 |
| token | CAPSULE_TOKEN | None |
| request_timeout | CAPSULE_REQUEST_TIMEOUT | 30.0 |
| startup_timeout | CAPSULE_STARTUP_TIMEOUT | 45.0 |
| operation_timeout | CAPSULE_OPERATION_TIMEOUT | 120.0 |

Example:

export CAPSULE_BASE_URL="http://localhost:8080"
export CAPSULE_TOKEN="my-token"
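The same settings can be resolved in code. As a sketch of the precedence the table above suggests (explicit argument, then environment variable, then default) — note the exact resolution order is an assumption here, not a documented guarantee:

```python
import os

# Illustrative only: resolve client settings with the precedence
# explicit argument > environment variable > table default.
def resolve_settings(base_url=None, token=None, request_timeout=None):
    return {
        "base_url": base_url
        or os.environ.get("CAPSULE_BASE_URL", "http://localhost:8080"),
        "token": token or os.environ.get("CAPSULE_TOKEN"),
        "request_timeout": float(
            request_timeout
            if request_timeout is not None
            else os.environ.get("CAPSULE_REQUEST_TIMEOUT", "30.0")
        ),
    }

print(resolve_settings(base_url="http://capsule.internal:8080"))
```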

Quickstart

The fastest way to get started is the high-level workloads API.

from capsule_sdk import CapsuleClient, RunnerConfig

cfg = (
    RunnerConfig("My dev sandbox")
    .with_base_image("ubuntu:22.04")
    .with_commands(["apt-get update", "apt-get install -y python3"])
    .with_tier("m")
    .with_ttl(3600)
    .with_auto_pause(True)
    .with_auto_rollout(True)
)

with CapsuleClient(base_url="http://localhost:8080", token="my-token") as client:
    workload = client.workloads.onboard(cfg)

    with client.workloads.start(workload) as runner:
        output, code = runner.exec_collect("python3", "-c", "print('hello')")
        print(output, code)

        runner.write_text("/workspace/hello.txt", "hello")
        print(runner.read_text("/workspace/hello.txt"))

Onboard From YAML

You can also onboard directly from an onboard.yaml-style file:

from capsule_sdk import CapsuleClient

with CapsuleClient(base_url="http://localhost:8080", token="my-token") as client:
    workload = client.workloads.onboard_yaml(
        "examples/afs/onboard.yaml",
        name="afs-sandbox",
    )

    with client.workloads.start("afs-sandbox") as runner:
        print(runner.read_text("/etc/hostname"))

AFS here is just an example workload name, not a special SDK mode; see examples/afs/ for the underlying config shape.

Async Quickstart

Use the async client in event-loop-native applications:

import asyncio

from capsule_sdk import AsyncCapsuleClient, RunnerConfig


async def main() -> None:
    cfg = (
        RunnerConfig("My async sandbox")
        .with_base_image("ubuntu:22.04")
        .with_commands(["echo async-ready"])
        .with_tier("m")
        .with_ttl(3600)
        .with_auto_pause(True)
    )

    async with AsyncCapsuleClient(base_url="http://localhost:8080", token="my-token") as client:
        workload = await client.workloads.onboard(cfg)
        runner = await client.workloads.start(workload)

        async with runner:
            result = await runner.exec_collect("sh", "-lc", "printf hello")
            print(result.stdout, result.exit_code)


asyncio.run(main())

Low-Level APIs

For finer control, work directly with the resource clients:

from capsule_sdk import CapsuleClient

with CapsuleClient(base_url="http://localhost:8080", token="my-token") as client:
    with client.runners.allocate_ready("my-workload-key") as runner:
        for event in runner.exec("echo", "hello"):
            if event.type == "stdout":
                print(event.data, end="")

Key low-level surfaces:

  • client.runners
  • client.workloads
  • client.snapshots
  • client.runner_configs

Multi-Tenant Builder Config

When running builds in a tenant's own GCP project, set the tenant GCE config on RunnerConfig. Builds will launch in the tenant's project while pulling the base builder image from central.

cfg = (
    RunnerConfig("my-sandbox")
    .with_base_image("ubuntu:22.04")
    .with_commands(["pip install -e ."])
    .with_tier("m")
    .with_tenant_gce_config(
        project="tenant-project-123",
        zone="us-central1-a",
        network="projects/tenant-project-123/global/networks/default",
        subnet="projects/tenant-project-123/regions/us-central1/subnetworks/default",
        service_account="builder@tenant-project-123.iam.gserviceaccount.com",
    )
)

Individual fields can also be set separately:

cfg = (
    RunnerConfig("my-sandbox")
    .with_tenant_gcp_project("tenant-project-123")
    .with_tenant_gcp_zone("us-central1-a")
)

Two tenants can register the same display name without conflict — the server qualifies the config ID with the tenant project automatically.
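The collision-avoidance property can be illustrated with a toy sketch. The real qualified-ID format is server-internal; the project/name form below is purely an assumption:

```python
# Purely illustrative: the server's actual ID format is internal.
# The point is only that qualifying by tenant project makes
# (project, display name) pairs unique across tenants.
def qualified_config_id(tenant_project: str, display_name: str) -> str:
    return f"{tenant_project}/{display_name}"

a = qualified_config_id("tenant-project-123", "my-sandbox")
b = qualified_config_id("tenant-project-456", "my-sandbox")
print(a != b)  # same display name, distinct IDs
```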

Credential Broker Proxy

To route runner traffic through a credential broker proxy, pass proxy_addr and optionally ca_cert_port during allocation:

with CapsuleClient() as client:
    runner = client.runners.allocate_ready(
        "my-workload-key",
        session_id="session-abc",
        proxy_addr="10.0.16.7:3128",
        ca_cert_port=8443,        # default, can be omitted
        tenant_id="tenant-project-123",
    )

This configures the runner to:

  • Route all egress through the proxy (deny-all except proxy IP)
  • Fetch the proxy CA certificate from proxy_addr host on ca_cert_port
  • Use session_id as the basic-auth username in proxy requests

The same parameters are available on allocate(), allocate_ready(), from_config(), and workloads.start() / workloads.allocate().

Key Concepts

| SDK concept | Server primitive | Description |
| --- | --- | --- |
| RunnerConfig | LayeredConfig | Declarative workload shape |
| workloads.onboard() | create + build | Register a workload from Python or YAML |
| workloads.start() | allocate + wait | Start a ready runner by workload name |
| runners.allocate_ready() | /runners/allocate | Allocate and wait for a usable runner |
| RunnerSession | runner handle | High-level exec, file, shell, pause, and resume API |

RunnerConfig Builder Methods

| Method | Field | Description |
| --- | --- | --- |
| with_base_image(img) | base_image | Docker image URI for layer 0 |
| with_commands(cmds) | layers[0].init_commands | Shell commands for the main layer |
| with_layers(layers) | layers | Full multi-layer definitions |
| with_tier(tier) | config.tier | VM size tier (s, m, l) |
| with_ttl(secs) | config.ttl | Runner time-to-live in seconds |
| with_auto_pause(bool) | config.auto_pause | Auto-pause idle runners |
| with_auto_rollout(bool) | config.auto_rollout | Auto-rollout new builds |
| with_session_max_age(secs) | config.session_max_age_seconds | Max session age |
| with_rootfs_size_gb(gb) | config.rootfs_size_gb | Root filesystem size |
| with_workspace_size_gb(gb) | config.workspace_size_gb | Workspace drive size |
| with_runner_user(user) | config.runner_user | Non-root user for commands |
| with_network_policy_preset(p) | config.network_policy_preset | Named network policy |
| with_network_policy(policy) | config.network_policy | Custom network policy JSON |
| with_start_command(cmd) | start_command | Long-running service command |
| with_auth(auth) | config.auth | Auth/proxy config |
| with_tenant_gce_config(...) | config.tenant_* | All tenant GCE fields at once |
| with_tenant_gcp_project(p) | config.tenant_gcp_project | Tenant GCP project for builds |
| with_tenant_gcp_zone(z) | config.tenant_gcp_zone | Tenant GCE zone |
| with_tenant_network(n) | config.tenant_network | Tenant VPC network |
| with_tenant_subnet(s) | config.tenant_subnet | Tenant VPC subnet |
| with_tenant_service_account(sa) | config.tenant_service_account | Tenant builder SA |

Retry And Timeout Behavior

  • request_timeout applies to a single HTTP request
  • startup_timeout covers "get me a usable runner"
  • operation_timeout applies to host-side file, PTY, and stream operations
  • allocate() retries transient control-plane and capacity errors until startup_timeout
  • workloads.start() is the preferred high-level path for named workloads
  • from_config() waits for runner readiness by default; use wait_ready=False for lower-level control
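The allocate() retry behavior above can be sketched as a generic deadline loop. This is an assumed shape: the SDK's actual backoff and error classification are not documented here.

```python
import time

class TransientError(Exception):
    """Stands in for retryable control-plane / capacity errors."""

# Sketch of "retry transient errors until startup_timeout": the deadline
# bounds total wall-clock time rather than a fixed attempt count.
def retry_until_deadline(op, startup_timeout, interval=0.01):
    deadline = time.monotonic() + startup_timeout
    while True:
        try:
            return op()
        except TransientError:
            if time.monotonic() >= deadline:
                raise
            time.sleep(interval)
```

Non-transient errors propagate immediately; only errors classified as transient are retried within the deadline.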

Host Reconnection

The SDK caches host addresses returned by allocate() and connect(). If a host proxy becomes unreachable during a safely retryable operation, the SDK refreshes the host address via connect() and retries the operation once.
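Schematically, the refresh-and-retry-once behavior looks like the sketch below. This is an assumed shape; the real SDK decides per-operation whether a retry is safe.

```python
# Illustrative sketch only: cache a host address, and on a connection
# failure refresh it once via connect() before retrying the operation.
class CachedHost:
    def __init__(self, connect):
        self._connect = connect  # callable returning a fresh host address
        self._host = None

    def call(self, op):
        if self._host is None:
            self._host = self._connect()
        try:
            return op(self._host)
        except ConnectionError:
            self._host = self._connect()  # refresh the cached host
            return op(self._host)         # retry exactly once
```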

Live End-To-End Test

The repository includes an explicit live SDK E2E at sdk/python/tests/e2e_live.py. It exercises config registration, build enqueue, allocation, exec, file ops, PTY, pause/resume, release, and config cleanup against a real control plane.

Run it with:

make sdk-python-e2e

If you are not using the default address:

CAPSULE_BASE_URL="http://localhost:8080" make sdk-python-e2e

Development Checks

python -m ruff check src/capsule_sdk/ tests/
python -m pyright src/capsule_sdk/
python -m pytest tests/ -v --ignore=tests/e2e_live.py --ignore=tests/e2e_live_async.py

For contract tests against a live control plane:

CAPSULE_BASE_URL=http://localhost:8080 CAPSULE_TOKEN=test-token \
  python -m pytest tests/test_contract.py -v -m contract

