Distributed execution backend for Pyoco using NATS

pyoco-server (NATS backend for Pyoco)

pyoco-server is a lightweight distributed execution backend for Pyoco. It is optimized for internal systems (a single organization with a small ops team), not for large, strictly multi-tenant platforms.

Version: 0.5.1

Positioning

  • Role: distributed execution backend (HTTP Gateway + worker + NATS JetStream).
  • Optimization target: internal platform operations by one team.
  • Non-goal: strict multi-tenant isolation platform with strong audit separation.

Value proposition with nats-bootstrap

pyoco-server focuses on execution, while nats-bootstrap handles practical NATS operations, using the same CLI family for both local and cluster workflows.

  • Symmetric operations: up / join / status / doctor / backup / restore / leave / down / service are provided by nats-bootstrap.
  • Day-2 operations: lifecycle and recovery are scriptable from CLI.
  • Complexity level: designed to be operable by a small team.

Evidence (commands):

uv run nats-bootstrap --help
uv run nats-bootstrap up --help
uv run nats-bootstrap join --help
uv run nats-bootstrap status --help
uv run nats-bootstrap doctor --help
uv run nats-bootstrap backup --help
uv run nats-bootstrap restore --help
uv run nats-bootstrap leave --help
uv run nats-bootstrap down --help
uv run nats-bootstrap service --help

Fit / Non-fit

Fit:

  • One team operating one internal environment.
  • Internal platform where users submit jobs via HTTP.
  • Docker-centric deployment with NATS JetStream.
  • "Start small, then operate" use cases needing queue + latest status + basic ops.

Non-fit:

  • Strict multi-tenant boundary requirements (hard isolation per tenant).
  • Strong audit/compliance separation across organizations.
  • Very large-scale SLO/SLA platforms requiring advanced fairness and isolation.

Quickstart (shortest path)

For the shortest local path (a single-node NATS instance managed together with the gateway):

uv sync
uv run pyoco-server up --with-nats-bootstrap --host 127.0.0.1 --port 8000 --dashboard-lang auto
uv run pyoco-worker --nats-url nats://127.0.0.1:4222 --tags hello --worker-id w1

Then submit a YAML workflow:

cat > flow.yaml <<'YAML'
version: 1
flow:
  graph: |
    add_one >> to_text
  defaults:
    x: 1
tasks:
  add_one:
    callable: pyoco_server._workflow_test_tasks:add_one
  to_text:
    callable: pyoco_server._workflow_test_tasks:to_text
YAML

uv run pyoco-client --server http://127.0.0.1:8000 submit-yaml --workflow-file flow.yaml --flow-name main --tag hello
uv run pyoco-client --server http://127.0.0.1:8000 watch <run_id> --until-terminal --output status

For single-node or 3-node cluster startup details, see docs/quickstart.md.

Operational constraints (current behavior)

These constraints reflect the currently implemented behavior and the nats-bootstrap 0.0.9 CLI/runtime.

  1. backup / restore require the nats CLI.
  • nats-bootstrap backup --help and restore --help expose --nats-cli-path.
  • If the nats CLI cannot be resolved, the command fails with nats cli not found.
  2. leave / controller are MVP-scoped.
  • leave requires --confirm and controller endpoint(s).
  • --controller must point to the endpoint started by nats-bootstrap controller start (not the NATS monitor port).
  • --stop-anyway allows the command to succeed when the controller is unavailable, but the local stop is skipped in MVP behavior.
  • controller currently provides only the start operation.
  3. down depends on a PID file in the current directory.
  • down requires --confirm and ./nats-server.pid.
  • If the PID file is missing or invalid, down fails.
  • If you need down, start NATS with PID output, e.g.:
uv run nats-bootstrap up -- -js -a 127.0.0.1 -p 4222 -m 8222 -P nats-server.pid
uv run nats-bootstrap down --confirm
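
Since backup / restore require the nats CLI (constraint 1), you can pin the binary explicitly when it is not on PATH. As a sketch, the path below is illustrative; take any remaining options from the respective --help:

uv run nats-bootstrap backup --nats-cli-path /usr/local/bin/nats
uv run nats-bootstrap restore --nats-cli-path /usr/local/bin/nats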

Current state

This repository is under active development. See docs/concept.md and docs/spec.md for the current behavior and contracts.

CLI commands

  • pyoco-server: HTTP Gateway launcher
  • pyoco-worker: worker launcher
  • pyoco-client: HTTP client CLI (submit/get/list/watch/tasks/workers/metrics/wheels/wheel-history/wheel-upload/wheel-delete)
  • pyoco-server-admin: API key management CLI
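
The client subcommands map onto the gateway's HTTP API. As a sketch (argument shapes are assumptions extrapolated from the quickstart; check --help for exact flags):

uv run pyoco-client --server http://127.0.0.1:8000 list
uv run pyoco-client --server http://127.0.0.1:8000 get <run_id>
uv run pyoco-client --server http://127.0.0.1:8000 workers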

YAML-first run (recommended)

.env is loaded automatically by NatsBackendConfig.from_env() (default file: .env). You can disable it with PYOCO_LOAD_DOTENV=0 or change the file path with PYOCO_ENV_FILE.
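
A minimal .env, as a sketch (the keys are the documented ones; the value is illustrative):

# .env: read by NatsBackendConfig.from_env() at startup
PYOCO_NATS_URL=nats://127.0.0.1:4222

Or export the variable directly and start the gateway: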

export PYOCO_NATS_URL="nats://127.0.0.1:4222"
uv run pyoco-server up --host 127.0.0.1 --port 8000 --dashboard-lang auto

Docs

  • Japanese README: README_ja.md
  • Concept: docs/concept.md
  • Spec (contract): docs/spec.md
  • Architecture: docs/architecture.md
  • Quickstart: docs/quickstart.md
  • Library API (Python): docs/library_api.md
  • Config (.env): docs/config.md
  • Roadmap: docs/plan.md

Development

Prerequisites:

  • Python 3.10+
  • uv

Install dependencies:

uv sync

Run tests (this will start an ephemeral NATS server for the integration tests):

uv run pytest

Tag routing

Runs are routed by subject:

  • publish to pyoco.work.<tag>
  • workers pull from one or more tags (OR semantics)
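
Because tags are OR-matched, a single worker listing several tags drains all of them. A sketch (comma-separated --tags is an assumption, mirroring the wheel-upload example):

# submit a run under the cpu tag; the worker below also serves gpu
uv run pyoco-client --server http://127.0.0.1:8000 submit-yaml --workflow-file flow.yaml --flow-name main --tag cpu
uv run pyoco-worker --nats-url nats://127.0.0.1:4222 --tags cpu,gpu --worker-id w2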

Wheel registry (optional)

pyoco-server exposes a wheel registry on /wheels backed by JetStream Object Store. Workers can opt in to sync and install wheels automatically before processing jobs:

  • Workers download wheels whose tags intersect their worker tags; wheels without tags are treated as shared by all workers.
  • Uploads must be a strict version bump per package; uploading the same or an older version returns HTTP 409.
  • Wheel upload/delete operations are recorded as history with request-source metadata.
  • Sync happens at worker startup and before the next polling cycle; workers do not start wheel updates in the middle of an active run.
  • When multiple versions exist, workers sync/install only the latest version per package.

export PYOCO_WHEEL_SYNC_ENABLED=1
uv run pyoco-worker --nats-url nats://127.0.0.1:4222 --tags cpu --worker-id w-cpu --wheel-sync
uv run pyoco-client --server http://127.0.0.1:8000 wheel-upload --wheel-file dist/my_ext-0.1.0-py3-none-any.whl --tags cpu,linux
uv run pyoco-client --server http://127.0.0.1:8000 wheels
uv run pyoco-client --server http://127.0.0.1:8000 wheel-history --limit 20
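
Note that re-uploading my_ext 0.1.0 from the example above would return HTTP 409; only a version bump is accepted, e.g.:

uv run pyoco-client --server http://127.0.0.1:8000 wheel-upload --wheel-file dist/my_ext-0.1.1-py3-none-any.whl --tags cpu,linux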
