
cueapi-worker

Local pull-based worker daemon for CueAPI. Polls for worker-transport executions, runs the matched shell handler, and auto-reports the outcome. No public URL required.

Install

pip install cueapi-worker
cueapi-worker --help

Configure

Copy example/cueapi-worker.yaml to somewhere writable and customize:

cp example/cueapi-worker.yaml ~/.config/cueapi/cueapi-worker.yaml
cueapi-worker start --config ~/.config/cueapi/cueapi-worker.yaml

API key resolution order:

  1. api_key: in the YAML config
  2. CUEAPI_API_KEY environment variable
  3. ~/.config/cueapi/credentials.json (populated by cueapi login)
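The precedence above can be mimicked in a few lines of shell. This is an illustrative sketch only, not the daemon's actual implementation, and it assumes the credentials file stores the key under an "api_key" field:

```shell
# Sketch of the documented resolution order: YAML value first, then the
# environment variable, then the file written by `cueapi login`.
resolve_api_key() {
  local yaml_key="$1"   # value of api_key: from the YAML config, may be empty
  if [ -n "$yaml_key" ]; then
    printf '%s\n' "$yaml_key"
  elif [ -n "${CUEAPI_API_KEY:-}" ]; then
    printf '%s\n' "$CUEAPI_API_KEY"
  else
    # assumed file layout: {"api_key": "..."}
    python3 -c 'import json, os; print(json.load(open(os.path.expanduser("~/.config/cueapi/credentials.json")))["api_key"])'
  fi
}
```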

Install as a background service

# macOS (launchd)
cueapi-worker install-service --config ~/.config/cueapi/cueapi-worker.yaml

# Linux (systemd user unit)
cueapi-worker install-service --config ~/.config/cueapi/cueapi-worker.yaml

Handler environment

Every handler command runs as a subprocess with these variables injected:

Variable             Description
CUEAPI_EXECUTION_ID  the execution being handled
CUEAPI_CUE_ID        the parent cue
CUEAPI_CUE_NAME      human-readable cue name
CUEAPI_WORKER_ID     this worker's id
CUEAPI_PAYLOAD       the full cue payload as a JSON string
CUEAPI_API_KEY       the same API key the worker is authenticated with
CUEAPI_BASE_URL      the CueAPI base URL (defaults to https://api.cueapi.ai)
CUEAPI_OUTCOME_FILE  path to a per-run temp file the handler may write JSON to (see "Reporting evidence from handlers" below)
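Since CUEAPI_PAYLOAD is a JSON string, a handler can recover structured fields from it. A minimal sketch; the task field and the standalone fallback payload are hypothetical, not part of any real cue:

```shell
# Parse the injected JSON payload. Inside a real handler CUEAPI_PAYLOAD
# is set by the daemon; the fallback lets the snippet run standalone.
payload=${CUEAPI_PAYLOAD:-'{"task": "demo"}'}
task=$(printf '%s' "$payload" | python3 -c 'import json, sys; print(json.load(sys.stdin)["task"])')
echo "handling task: $task"
```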

Handler-level env: entries from the YAML are layered on top and support {{ payload.field }} template resolution (nested dot notation works).
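For instance, a handler config might template payload fields into its own environment like this (the field names ticket.id and channel are hypothetical, chosen only to show nested dot notation):

```yaml
handlers:
  notify:
    cmd: ./notify.sh
    env:
      # resolved from the cue payload at run time
      TICKET_ID: "{{ payload.ticket.id }}"
      CHANNEL: "{{ payload.channel }}"
```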

Chain pattern

A handler can fire a follow-up cue without duplicating the API key in its own config. The worker's key is already in the handler's environment as $CUEAPI_API_KEY:

handlers:
  ingest_then_summarize:
    cmd: |
      set -e
      commit_sha=$(ingest_and_commit)
      FIRE_AT=$(date -u -v+5M '+%Y-%m-%dT%H:%M:%SZ' 2>/dev/null || \
                date -u -d '+5 minutes' '+%Y-%m-%dT%H:%M:%SZ')
      curl -sf -X POST "$CUEAPI_BASE_URL/v1/cues" \
        -H "Authorization: Bearer $CUEAPI_API_KEY" \
        -H "Content-Type: application/json" \
        -d "{
          \"name\": \"summarize-$commit_sha\",
          \"schedule\": {\"type\": \"once\", \"at\": \"$FIRE_AT\"},
          \"transport\": \"worker\",
          \"payload\": {\"task\": \"summarize\", \"commit_sha\": \"$commit_sha\"}
        }"

Outcome reporting

The daemon reports the outcome automatically based on the handler's exit code — success on exit 0, failure otherwise. Handler stdout+stderr (capped at 4 KB) is recorded as result on success or error on failure. Handlers do not need to call the outcome endpoint themselves.
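As a concrete illustration of that contract (a sketch, not shipped with the package): a handler that only needs pass/fail reporting can rely entirely on its exit status and output.

```shell
# Minimal handler: no outcome file needed.
# Exit status 0 -> reported as success, stdout (capped at 4 KB) becomes result.
# Non-zero exit -> reported as failure, stdout+stderr becomes error.
echo "synced 42 files"
```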

Reporting evidence from handlers

A handler MAY write a JSON object to the CUEAPI_OUTCOME_FILE path before exiting to:

  • attach evidence fields (external_id, result_url, result_type, summary, artifacts, result_ref, metadata), which lets worker-transport cues satisfy verification modes like require_external_id and require_result_url;
  • or override the exit-code-derived success value when the handler knows more than the shell (e.g., the subprocess exited 0 but the downstream API returned 503, so the logical outcome is failure).

The file is optional. If a handler does nothing with it, behavior is exactly as before: exit code decides.

Example — git commit handler that proves the commit landed:

handlers:
  git_commit_summary:
    cmd: |
      set -e
      cd /path/to/repo
      git add summaries/yesterday.md
      git commit -m "daily summary $(date +%Y-%m-%d)"
      SHA=$(git rev-parse HEAD)
      git push
      cat > "$CUEAPI_OUTCOME_FILE" <<EOF
      {
        "success": true,
        "external_id": "$SHA",
        "result_url": "https://github.com/you/repo/commit/$SHA",
        "result_type": "git_commit",
        "summary": "Committed daily summary"
      }
      EOF
    timeout: 120

Schema

All fields are optional. The file is parsed as UTF-8 JSON; each field that doesn't match the schema below is dropped individually (the rest of the file still takes effect).

Field        Type    Limit
success      bool    -
result       string  ≤ 2000 chars
error        string  ≤ 2000 chars
external_id  string  ≤ 500 chars
result_url   string  ≤ 2000 chars; must start with http:// or https://
result_ref   string  ≤ 500 chars
result_type  string  ≤ 100 chars
summary      string  ≤ 500 chars
artifacts    list    -
metadata     dict    -

File size cap: 10 KB. Files over that are rejected and the run falls back to exit-code-only behavior (with a diagnostic breadcrumb in reported metadata).

Conflict resolution

Exit code  File                                       Decision
0          absent / empty                             success, no evidence
≠ 0        absent / empty                             failure; error = stdout/stderr
0          success: true (or omitted)                 success; evidence merged
0          success: false                             failure; the file wins (the handler knows more than the shell), with error taken from the file if given
≠ 0        success: true                              failure; the exit code wins (the process crashed, so the file record is stale). Evidence is still attached: a crashing process may legitimately have produced real side effects worth recording.
any        malformed JSON / too large / bad encoding  fall back to exit-code-only behavior; attach _cueapi_worker.outcome_file_parse_error to metadata
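The success: false row covers handlers like the following sketch, where the subprocess exits 0 but the logical outcome is a failure (the 503 scenario is illustrative; defaulting CUEAPI_OUTCOME_FILE to a temp file just lets the snippet run standalone, since the daemon normally injects it):

```shell
# Override an exit-0 run as a logical failure via the outcome file.
: "${CUEAPI_OUTCOME_FILE:=$(mktemp)}"
cat > "$CUEAPI_OUTCOME_FILE" <<'EOF'
{"success": false, "error": "downstream API returned 503"}
EOF
```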

Dropped fields are surfaced under metadata._cueapi_worker.outcome_file_dropped_fields for debugging without breaking the run.

The temp file is always deleted after the handler returns — even on timeout, exception, or crash.

Security / trust model

Handlers run with the same privileges the worker has on the CueAPI account. CUEAPI_API_KEY is exposed to the handler subprocess so it can fire follow-up cues, list executions, or otherwise call the API without re-reading config. If a handler command — or a binary or dependency it shells out to — is compromised, the API key is compromised.

There are no per-handler scoped tokens today. Decide which handlers to run on what machine with that in mind; don't run untrusted handlers under a worker wired to a production key.

Troubleshooting

  • cueapi-worker start warns "No launchd service detected" — you ran the daemon from a terminal. It will stop when the terminal closes. Run cueapi-worker install-service to register it as a launchd (macOS) or systemd user unit (Linux).
  • 401s in logs — the API key has been rotated or revoked. If your config has email: you@example.com, the daemon will auto-recover via magic-link flow. Otherwise run cueapi-worker login --email <email> --config <path> or cueapi-worker regenerate-key --config <path>.

License

Apache 2.0 © Vector Apps Inc.
