The core component of the Cyberwave Edge Node

Cyberwave Edge Core

This module is part of Cyberwave: Making the physical world programmable.

Cyberwave Edge Core acts as the orchestrator of Cyberwave edge drivers.

Quickstart

SSH to the edge device where you want to install Edge Core, then install the Cyberwave CLI and run the installer:

# Install the Cyberwave CLI (one-time setup)
curl -fsSL https://cyberwave.com/install.sh | bash

# Run the edge installer (interactive)
sudo cyberwave edge install

The installer will prompt you to log in with your Cyberwave account, select a workspace and environment, and persist configuration under /etc/cyberwave/ (on Linux) or ~/.cyberwave/ (on macOS). You can override the config directory via the CYBERWAVE_EDGE_CONFIG_DIR environment variable.
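
For reference, the resolution order can be sketched in a few lines of Python (paths as described above):

import os
import platform
from pathlib import Path

def config_dir() -> Path:
    """Sketch of the config-directory resolution described above:
    the env var override wins, then the platform default applies."""
    override = os.environ.get("CYBERWAVE_EDGE_CONFIG_DIR")
    if override:
        return Path(override)
    if platform.system() == "Darwin":   # macOS
        return Path.home() / ".cyberwave"
    return Path("/etc/cyberwave")       # Linux default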

Don't have a Cyberwave account? Get one at cyberwave.com

Config files created

The installer and Edge Core create these files in the config directory:

File               Description
credentials.json   API token and workspace information
fingerprint.json   Device fingerprint (generated by Edge Core)
environment.json   Selected environment and twin UUIDs

Edge Core requires credentials.json to operate. fingerprint.json is produced by Edge Core; environment.json is written by the CLI during setup.

How Edge Core works

On startup (service or direct run), Edge Core performs the following steps:

  1. Validate credentials from credentials.json.
  2. Connect to the backend MQTT broker and verify connectivity.
  3. Register the edge device and record a unique edge_fingerprint.
  4. Download the selected environment and resolve twins linked to the fingerprint.
  5. Start drivers for linked twins, with special handling for attached camera child twins:
    • If a twin is a camera child (has attach_to_twin_uuid), Edge Core does not start a separate driver for it.
    • Camera child UUIDs are passed to the parent driver via CYBERWAVE_CHILD_TWIN_UUIDS.
  6. Start the worker container (if worker files exist in {config_dir}/workers/).

During driver startup, Docker image pull progress is mirrored into the edge-core service logs and forwarded through the same MQTT-backed driver log stream used for runtime container logs, so users can follow image download progress remotely.

Remote restart (Edge REST API)

Request a remote restart of Edge Core via the REST API:

POST /api/v1/edges/{uuid}/restart-core

The API will publish an MQTT message to the edge's command topic:

Topic: edges/{edge_uuid}/command

Example payload:

{ "command": "restart_edge_core" }

When Edge Core receives this command it performs a graceful restart consisting of:

  1. Stopping the worker container (if running).
  2. Removing cached twin JSON files from the edge config directory.
  3. Stopping and removing any edge-managed driver containers, then pruning stopped containers.
  4. Re-downloading the selected environment and restarting drivers.
  5. Restarting the worker container (if worker files exist).

The restart is intended to preserve durable state where possible. If connectivity is available before shutdown, Edge Core will attempt to sync any twin JSON changes back to the backend.

Model Manager (ML model cache)

Edge Core includes a ModelManager that pre-downloads ML model weights from the Cyberwave catalog API into a local cache before starting the worker container.

Cache location:

Platform   Default path
Linux      /etc/cyberwave/models/
macOS      ~/.cyberwave/models/

Override with CYBERWAVE_EDGE_CONFIG_DIR.

Cache layout:

<cache_dir>/
├── manifest.json            # index of all cached models
├── yolov8n/
│   ├── yolov8n.pt           # weight file
│   └── metadata.json        # checksum, runtime, download URL
└── background-subtraction/
    └── ...

Model requirements discovery: Edge Core scans *.py files in the workers directory ({config_dir}/workers/) for cw.models.load(...) calls to determine which weights to pre-download.
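
A minimal sketch of what such a scan could look like (the regex and helper name are illustrative, not the actual implementation):

import re
from pathlib import Path

LOAD_CALL = re.compile(r"""cw\.models\.load\(\s*['"]([\w.\-]+)['"]""")

def discover_required_models(workers_dir: Path) -> set[str]:
    # Collect every model name passed to cw.models.load("...") in worker scripts.
    names: set[str] = set()
    for script in workers_dir.glob("*.py"):
        names.update(LOAD_CALL.findall(script.read_text()))
    return names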

Cache integrity: SHA-256 checksums are verified on every cache hit. A checksum mismatch or missing file triggers an automatic re-download.
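In pseudocode terms, the integrity check amounts to something like this (the "checksum" key in metadata.json is inferred from the cache layout above):

import hashlib
import json
from pathlib import Path

def cache_entry_is_valid(weight: Path, metadata: Path) -> bool:
    # Compare the weight file's SHA-256 against the recorded checksum;
    # a mismatch or missing file should trigger a re-download.
    if not weight.exists():
        return False
    meta = json.loads(metadata.read_text())
    digest = hashlib.sha256()
    with weight.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == meta["checksum"]   # key name assumed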

Worker container

Edge Core manages one ML worker container per edge device (container name: cyberwave-worker-{env_uuid[:8]}). The worker container runs Python worker scripts from the local workers directory and has access to cached model weights.

Worker directory layout

Place worker scripts in {config_dir}/workers/ (default: /etc/cyberwave/workers/ on Linux):

/etc/cyberwave/
├── workers/
│   ├── detect_people.py        # Custom worker
│   └── cyberwave.yml           # Optional: list model requirements
└── models/                     # Auto-managed model cache
    ├── manifest.json
    └── yolov8n/
        └── yolov8n.pt

cyberwave.yml

Optionally declare model requirements so Edge Core can pre-download them before starting the worker container:

models:
  - yolov8n
  - background-subtraction

Edge Core also auto-detects models by scanning cw.models.load("...") calls in worker Python files.
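
A minimal hypothetical worker illustrating both mechanisms; only cw.models.load and cw.data.publish are taken from this README, everything else (frame handling, topic, payload shape) is invented for the example:

# detect_people.py -- hypothetical worker sketch
import cw  # worker SDK available inside the container

# Loading by name lets Edge Core pre-download the weights into the cache.
model = cw.models.load("yolov8n")

def handle_frame(frame):
    # Publish results over the Zenoh data bus; topic and payload
    # shape here are illustrative only.
    detections = model(frame)
    cw.data.publish("people/detections", {"count": len(detections)})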

Worker container environment variables

Variable                     Value
CYBERWAVE_API_KEY            Injected from credentials
CYBERWAVE_ENVIRONMENT_UUID   Active environment UUID
CYBERWAVE_TWIN_UUIDS         Comma-separated twin UUIDs in environment
CYBERWAVE_DATA_BACKEND       zenoh
ZENOH_CONNECT                Set when a Zenoh router is configured
ZENOH_SHM_ENABLED            true on Linux (shared memory transport)

File watching and hot-reload

Edge Core monitors {config_dir}/workers/ every reconcile cycle (~15 seconds). When .py files are added, removed, or modified, Edge Core automatically:

  1. Re-scans model requirements.
  2. Pre-downloads any missing models.
  3. Restarts the worker container with the updated files.

A minimum cool-down of 10 seconds between successive automatic restarts prevents rapid churn when files are written incrementally (e.g. by rsync or scp).
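
The cool-down is essentially a debounce; a sketch of the policy:

import time

COOLDOWN_SECONDS = 10.0
_last_restart = 0.0

def maybe_restart_worker(restart_fn) -> bool:
    # Skip the restart if the previous one was under COOLDOWN_SECONDS ago,
    # so incrementally written files (rsync/scp) can settle first.
    global _last_restart
    now = time.monotonic()
    if now - _last_restart < COOLDOWN_SECONDS:
        return False
    _last_restart = now
    restart_fn()
    return True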

Worker health monitoring

Edge Core continuously monitors the worker container for spontaneous exits and crash loops:

  • Restart accounting: every restart is recorded with a timestamp and reason.
  • Sliding-window rate limiting: if more than 5 restarts occur within 5 minutes, the circuit-breaker trips and automatic restarts are suppressed. The breaker resets automatically once the window clears.
  • Spontaneous exit detection: if the container exits without a deliberate restart, a warning is logged so operators can investigate.

Use cyberwave-edge-core worker health to inspect the full restart history and circuit-breaker state.
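
The breaker behaves like a sliding-window rate limiter; a sketch of the policy described above (not the actual implementation):

import time
from collections import deque

WINDOW_SECONDS = 300   # 5 minutes
MAX_RESTARTS = 5

class RestartBreaker:
    def __init__(self) -> None:
        self._events: deque[float] = deque()

    def allow_restart(self) -> bool:
        # Drop restart timestamps that have aged out of the window;
        # this is what lets the breaker reset automatically.
        now = time.monotonic()
        while self._events and now - self._events[0] > WINDOW_SECONDS:
            self._events.popleft()
        if len(self._events) >= MAX_RESTARTS:
            return False   # tripped: suppress automatic restarts
        self._events.append(now)
        return True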

Resource limits

You can constrain the worker container's CPU and memory usage by setting CYBERWAVE_WORKER_CPU_QUOTA_PERCENT and CYBERWAVE_WORKER_MEMORY_MB environment variables on the edge host (both optional). When set, Edge Core passes the corresponding --cpu-quota, --cpu-period, and --memory flags to docker run.
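
A sketch of the percent-to-flags translation, assuming Docker's default 100 ms CFS period (the helper is illustrative):

def docker_resource_flags(cpu_quota_percent: float | None,
                          memory_mb: int | None) -> list[str]:
    # Translate the env-var settings into docker run flags.
    flags: list[str] = []
    if cpu_quota_percent is not None:
        period = 100_000   # microseconds; Docker's default CFS period
        quota = int(period * cpu_quota_percent / 100)
        flags += ["--cpu-period", str(period), "--cpu-quota", str(quota)]
    if memory_mb is not None:
        flags += ["--memory", f"{memory_mb}m"]
    return flags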

GPU memory fraction can be limited via CYBERWAVE_GPU_MEM_FRACTION (a float between 0 and 1); this is passed as an env var into the worker container.

GPU support

When an NVIDIA container runtime is detected (docker info reports nvidia runtime), Edge Core adds --gpus all to the worker container's docker run command.

Writing compatible drivers

A Cyberwave driver is a Docker image that interacts with device hardware and the Cyberwave backend. When Edge Core starts a driver container, it sets the following environment variables:

  • CYBERWAVE_TWIN_UUID
  • CYBERWAVE_API_KEY
  • CYBERWAVE_TWIN_JSON_FILE (writable file path)
  • CYBERWAVE_CHILD_TWIN_UUIDS (optional, comma-separated)
  • CYBERWAVE_DATA_BACKEND — data transport backend (zenoh by default)
  • ZENOH_SHARED_MEMORY — true/false; enables zero-copy Zenoh SHM transport
  • ZENOH_CONNECT — (optional) comma-separated Zenoh router endpoint URLs

CYBERWAVE_CHILD_TWIN_UUIDS is present when child camera twins are attached to the driver twin; drivers can use this to coordinate cameras without additional prompts.
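
From the driver's side, consuming these variables looks roughly like this (variable names from the list above; the code itself is illustrative):

import json
import os

# Variables injected by Edge Core.
twin_uuid = os.environ["CYBERWAVE_TWIN_UUID"]
twin_json_path = os.environ["CYBERWAVE_TWIN_JSON_FILE"]
child_uuids = [
    u for u in os.environ.get("CYBERWAVE_CHILD_TWIN_UUIDS", "").split(",") if u
]

with open(twin_json_path) as f:
    twin = json.load(f)   # twin instance + catalog data (see "Twin JSON file")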

Zenoh data bus

Edge Core automatically injects Zenoh transport configuration into every driver container so that drivers using cw.data.publish() work without any extra configuration. The data-bus variables are:

Variable                 Default   Description
CYBERWAVE_DATA_BACKEND   zenoh     Data transport: zenoh or filesystem
ZENOH_SHARED_MEMORY      false     Enable Zenoh shared-memory for same-host zero-copy delivery
ZENOH_CONNECT            (empty)   Router endpoints, e.g. tcp/10.0.0.1:7447

All variables can be overridden per-driver with -e KEY=VALUE in driver params.

Peer-to-peer mode (default): when ZENOH_CONNECT is empty, Zenoh uses multicast discovery. On Linux with --network host (the default), all driver containers on the same machine discover each other automatically.

Router mode (optional): set ZENOH_ROUTER_ENABLED=true to have Edge Core start an eclipse/zenoh:latest router container before the driver containers. This is required for MQTT bridge or multi-hop topologies.

Environment variables for Zenoh infrastructure:

Variable               Default                Description
ZENOH_ROUTER_ENABLED   false                  Start a Zenoh router container before drivers
ZENOH_ROUTER_IMAGE     eclipse/zenoh:latest   Docker image for the router
ZENOH_ROUTER_PORT      7447                   Host port for the router
ZENOH_SHARED_MEMORY    false                  Enable shared-memory transport (Linux only)

Driver failure handling

Drivers must exit with a non-zero code when they cannot access required hardware (for example, missing /dev/video* or disconnected peripherals). This allows Edge Core to detect startup failures and trigger restart logic.
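
For example, a camera driver might fail fast like this (illustrative only):

import glob
import sys

# Exit non-zero when required hardware is missing so Edge Core's
# failure detection and restart logic can react.
if not glob.glob("/dev/video*"):
    print("no /dev/video* device found", file=sys.stderr)
    sys.exit(1)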

Edge Core alerts and behavior:

  • driver_start_failure: raised if a driver container cannot reach a stable running state.
  • driver_restart_loop: raised when a driver restarts more than the configured threshold (default 4 restarts within 60 seconds). The container is stopped and marked as flapping.

Optional environment variables to tune restart behavior:

  • CYBERWAVE_DRIVER_RESTART_LOOP_THRESHOLD (default: 4)
  • CYBERWAVE_DRIVER_RESTART_LOOP_WINDOW_SECONDS (default: 60)
  • CYBERWAVE_DRIVER_TROUBLESHOOTING_URL (default: https://docs.cyberwave.com)

Twin JSON file

CYBERWAVE_TWIN_JSON_FILE is an absolute path to a JSON file provided to the driver. The file contains the digital twin instance object (including its metadata) and the associated catalog twin data, matching the API schema: TwinSchema and AssetSchema.

Drivers may modify this file; Edge Core will sync changes back to the backend when connectivity is available.

Twin metadata

Use the official Cyberwave SDK to interact with the API and MQTT; it abstracts authentication, retries, and handshake logic.

Register a driver by adding its configuration to a twin's metadata (or the catalog twin's metadata if you control the catalog twin). Use the environment view's Advanced editing to edit metadata.

Note: changing a catalog twin's metadata affects all subsequently created digital twins derived from that catalog twin.

Example driver metadata (JSON):

{
  "drivers": {
    "default": {
      "docker_image": "cyberwaveos/so101-driver",
      "version": "0.0.1",
      "params": [
        "--network",
        "local",
        "--add-host",
        "host.docker.internal:host-gateway"
      ]
    }
  }
}

Platform-specific driver selection

Edge Core can select platform-specific driver entries before falling back to default.

Selection order:

  1. Child-registry-specific entry (existing behavior)
  2. Host platform/machine keys (for example darwin-arm64, darwin, macos, mac)
  3. default

Example:

{
  "drivers": {
    "default": {
      "docker_image": "cyberwaveos/so101-driver"
    },
    "darwin-arm64": {
      "docker_image": "cyberwaveos/so101-driver:macos",
      "params": ["-e", "CYBERWAVE_SERIAL_BRIDGE_URL=tcp://host.docker.internal:22001"]
    }
  }
}
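
A sketch of that fallback order (child-registry lookup omitted; key names taken from the list above):

import platform

def select_driver_entry(drivers: dict) -> dict | None:
    # Try host-specific keys first, then generic platform aliases, then "default".
    system = platform.system().lower()     # e.g. "darwin"
    machine = platform.machine().lower()   # e.g. "arm64"
    candidates = [f"{system}-{machine}", system]
    if system == "darwin":
        candidates += ["macos", "mac"]
    candidates.append("default")
    for key in candidates:
        if key in drivers:
            return drivers[key]
    return None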

macOS host-device bridge hook

On macOS, Linux --device mappings in params cannot directly expose host hardware to Linux containers. Edge Core supports a pre-run native bridge hook:

  • Set CYBERWAVE_MACOS_DEVICE_BRIDGE_COMMAND on the host
  • Edge Core executes it once per --device mapping before docker run
  • Template variables available:
    • {host_device}
    • {container_device}
    • {twin_uuid}
    • {container_name}
    • {config_dir}

Example:

export CYBERWAVE_MACOS_DEVICE_BRIDGE_COMMAND="cyberwave-edge-hw-bridge --device {host_device} --target {container_device} --twin {twin_uuid}"

The command can start native camera/serial forwarding services that expose bridge endpoints to the container (typically via host.docker.internal).

Bridge command stdout can optionally return a resolved source for the mapped device:

  • JSON: {"resolved_device":"rtsp://host.docker.internal:8554/cam0"}
  • or line format: resolved_device=rtsp://host.docker.internal:8554/cam0

When this value differs from /dev/video*, Edge Core can transparently:

  • inject CYBERWAVE_METADATA_VIDEO_DEVICE for the driver
  • inject CYBERWAVE_EDGE_VIDEO_DEVICE_MAP (JSON map of Linux device to resolved source)
  • remove Linux-only --device /dev/video* flags before docker run on macOS (default enabled with CYBERWAVE_MACOS_STRIP_VIDEO_DEVICE_PARAMS=true)

This lets Linux-style drivers keep their normal auto-setup logic while receiving a macOS-compatible video source without driver code changes.
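
Parsing the two stdout formats is straightforward; a sketch:

import json

def parse_resolved_device(stdout: str) -> str | None:
    # Accept either the JSON form or the key=value line form shown above.
    text = stdout.strip()
    try:
        data = json.loads(text)
        if isinstance(data, dict):
            return data.get("resolved_device")
    except json.JSONDecodeError:
        pass
    for line in text.splitlines():
        if line.startswith("resolved_device="):
            return line.split("=", 1)[1].strip()
    return None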

For camera twins, Edge Core can also provide default bridge candidates on macOS even when metadata has no explicit --device params (default driver config), so Linux-oriented camera drivers remain compatible with minimal metadata.

To inject environment variables into a driver container, list -e flags inside params. Example:

{
  "drivers": {
    "default": {
      "docker_image": "cyberwaveos/go2-native-driver",
      "params": ["-e", "MY_VAR=value", "-e", "ANOTHER_VAR=value2"]
    }
  }
}

Each -e must be its own element in the array, followed by the KEY=value string as the next element. This is equivalent to passing -e MY_VAR=value on the docker run command line.

This is useful for driver-specific configuration that varies per device, such as IP addresses, credentials, or feature flags that cannot be stored in the twin's edge_configs metadata.

Runtime configuration for drivers (metadata["edge_configs"])

Drivers and edge services should treat metadata["edge_configs"] as the source of truth for per-device runtime configuration. Edge identity should be stored at metadata["edge_fingerprint"] (not duplicated inside edge_configs).

Runtime access: The core passes the full twin JSON (including metadata) to every driver via the CYBERWAVE_TWIN_JSON_FILE environment variable. Drivers can read edge_configs from that file at startup to obtain per-device settings — for example, selecting the right camera source or IP address for the current machine. This is the recommended way to pass device-specific configuration to a driver without hardcoding values in the image.

  • Type: object (dictionary)
  • Value: a binding object holding per-device runtime settings

Canonical shape:

{
  "edge_fingerprint": "macbook-pro-a1b2c3d4e5f6",
  "edge_configs": {
    "camera_config": {
      "camera_id": "front",
      "source": "rtsp://user:pass@192.168.1.20/stream",
      "fps": 10,
      "resolution": "VGA",
      "camera_type": "cv2"
    }
  }
}

Field notes:

  • edge_fingerprint: fingerprint of the edge serving this twin (recommended).
  • camera_config: per-device camera/runtime config consumed by drivers.

Avoid storing transient runtime state such as edge_uuid, registered_at, last_sync, last_ip_address, or status_data inside edge_configs.
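
A driver-side sketch of reading camera_config via the twin JSON file (the metadata nesting follows the canonical shape above):

import json
import os

with open(os.environ["CYBERWAVE_TWIN_JSON_FILE"]) as f:
    twin = json.load(f)

# Per-device runtime settings live under metadata["edge_configs"].
camera_cfg = (
    twin.get("metadata", {})
        .get("edge_configs", {})
        .get("camera_config", {})
)
source = camera_cfg.get("source")   # e.g. an RTSP URL for this machine
fps = camera_cfg.get("fps", 10)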

Backward compatibility:

  • Older records may use a legacy map shape (edge_configs[fingerprint] = {...}).
  • Older records may store camera settings in cameras[0] or as top-level fields.
  • New writers should prefer camera_config under edge_configs.
  • Do not rely on PUT /api/v1/edges/{uuid}/twins/{twin_uuid}/camera-config; it is deprecated. Update twin metadata instead.

Advanced usage

Manual install and troubleshooting

# Install the Buildkite package signing key
curl -fsSL "https://packages.buildkite.com/cyberwave/cyberwave-edge-core/gpgkey" | gpg --dearmor -o /etc/apt/keyrings/cyberwave_cyberwave-edge-core-archive-keyring.gpg

# Configure the Apt source
echo -e "deb [signed-by=/etc/apt/keyrings/cyberwave_cyberwave-edge-core-archive-keyring.gpg] https://packages.buildkite.com/cyberwave/cyberwave-edge-core/any/ any main\ndeb-src [signed-by=/etc/apt/keyrings/cyberwave_cyberwave-edge-core-archive-keyring.gpg] https://packages.buildkite.com/cyberwave/cyberwave-edge-core/any/ any main" \
  > /etc/apt/sources.list.d/buildkite-cyberwave-cyberwave-edge-core.list

# Install the package (run apt update first so the new source is picked up)
sudo apt update && sudo apt install -y cyberwave-edge-core

# Run Edge Core (performs startup checks and starts drivers + worker container)
cyberwave-edge-core

# Show status, credentials and MQTT connectivity (read-only)
cyberwave-edge-core status

# Show version
cyberwave-edge-core --version

# Worker container management
cyberwave-edge-core worker start      # Start the worker container
cyberwave-edge-core worker stop       # Stop the worker container
cyberwave-edge-core worker restart    # Restart the worker container
cyberwave-edge-core worker status     # Show container state, workers, cached models, and health
cyberwave-edge-core worker health     # Show detailed restart history and circuit-breaker state
cyberwave-edge-core worker logs       # Stream worker container logs
cyberwave-edge-core worker logs --no-follow  # Print recent logs without following

Preview builds from dev / staging CI are published as separate Debian packages in the same apt repo: cyberwave-edge-core-dev and cyberwave-edge-core-staging. apt install cyberwave-edge-core only pulls tagged releases; use one of the channel packages explicitly when you want those binaries (the packages conflict because they ship the same /usr/bin/cyberwave-edge-core).

Environment variables

Run against a different environment/base URL:

export CYBERWAVE_ENVIRONMENT="yourenv"
export CYBERWAVE_BASE_URL="https://yourbaseurl"
cyberwave-edge-core

Control log verbosity (default: INFO):

export CYBERWAVE_EDGE_LOG_LEVEL="DEBUG"
cyberwave-edge-core

Or pass env vars to the CLI installer:

sudo CYBERWAVE_ENVIRONMENT="yourenv" CYBERWAVE_BASE_URL="https://yourbaseurl" CYBERWAVE_MQTT_HOST="yourmqtt" cyberwave edge install

Local development (from this folder)

You can develop both the Cyberwave CLI and Edge Core from the cyberwave-edge-core directory using a single virtual environment that has the monorepo SDK, CLI, and edge-core installed in editable mode.

One-time setup

From cyberwave-edge-core/:

# Create and activate a venv (e.g. .venv in this folder)
python3 -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate

# Install SDK, CLI, and Edge Core in editable mode (order matters: SDK first)
pip install -e ../cyberwave-sdks/cyberwave-python
pip install -e ../cyberwave-clis/cyberwave-python-cli/"[build]"
pip install -e ".[build]"

Generate the SDK REST client (required for editable SDK). The SDK’s cyberwave.rest package is generated from the backend OpenAPI spec and is not committed. If you see ImportError: cannot import name 'DefaultApi' from 'cyberwave.rest':

  1. Start the backend: cd ../cyberwave-backend && docker compose -f local.yml up -d (wait until healthy).
  2. From the repo root, generate the REST client:
    cd cyberwave-sdks && ./python-sdk-gen.sh sdk --host localhost:8000
    
  3. Re-run the pip install -e steps above if you already installed; the editable SDK will then include the generated cyberwave/rest code.

Run CLI and Edge Core

After activating the venv, both commands are on your PATH:

# CLI
cyberwave --help
cyberwave login --email boss@cyberwave.com --password iamnottheboss
cyberwave edge install --help

# Edge Core
cyberwave-edge-core --help
cyberwave-edge-core status
cyberwave-edge-core

Target backend: If you do not set CYBERWAVE_BASE_URL, the CLI and Edge Core use the default production API (https://api.cyberwave.com). To use your local backend instead:

export CYBERWAVE_BASE_URL=http://localhost:8000
export CYBERWAVE_MQTT_HOST=localhost
export CYBERWAVE_ENVIRONMENT=local

Paths from this folder

What         Path (from cyberwave-edge-core/)
Repo root    ..
Python SDK   ../cyberwave-sdks/cyberwave-python
CLI          ../cyberwave-clis/cyberwave-python-cli

Edit code in any of those directories; the editable installs pick up changes (no reinstall needed for Python changes).

Contributing

Contributions are welcome. Please open an issue to discuss bugs or feature requests, and submit a pull request when you are ready.
