
The core component of the Cyberwave Edge Node



Cyberwave Edge Core

This module is part of Cyberwave: Making the physical world programmable.

Cyberwave Edge Core acts as the orchestrator of Cyberwave edge drivers.


Quickstart

SSH to the edge device where you want to install Edge Core, then install the Cyberwave CLI and run the installer:

# Install the Cyberwave CLI (one-time setup)
curl -fsSL https://cyberwave.com/install.sh | bash

# Run the edge installer (interactive)
sudo cyberwave edge install

The installer will prompt you to log in with your Cyberwave account, select a workspace and environment, and persist configuration under /etc/cyberwave/ (on Linux) or ~/.cyberwave/ (on macOS). You can override the config directory via the CYBERWAVE_EDGE_CONFIG_DIR environment variable.

Don't have a Cyberwave account? Get one at cyberwave.com

Config files created

The installer and Edge Core create these files in the config directory:

File              Description
credentials.json  API token and workspace information
fingerprint.json  Device fingerprint (generated by Edge Core)
environment.json  Selected environment and twin UUIDs

Edge Core requires credentials.json to operate. fingerprint.json is produced by Edge Core; environment.json is written by the CLI during setup.
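As a rough sketch of how a tool might locate and read these files (the default paths and the CYBERWAVE_EDGE_CONFIG_DIR override come from the quickstart above; the helper names are illustrative, not part of the actual codebase):

```python
import json
import os
import sys
from pathlib import Path

def resolve_config_dir() -> Path:
    """Resolve the Edge Core config directory, honoring the documented override."""
    override = os.environ.get("CYBERWAVE_EDGE_CONFIG_DIR")
    if override:
        return Path(override)
    # Defaults from the quickstart: /etc/cyberwave on Linux, ~/.cyberwave on macOS.
    if sys.platform == "darwin":
        return Path.home() / ".cyberwave"
    return Path("/etc/cyberwave")

def load_credentials(config_dir: Path) -> dict:
    """credentials.json is required for Edge Core to operate."""
    path = config_dir / "credentials.json"
    if not path.exists():
        raise FileNotFoundError(
            f"Missing {path}; run 'sudo cyberwave edge install' first."
        )
    return json.loads(path.read_text())
```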

How Edge Core works

On startup (service or direct run), Edge Core performs the following steps:

  1. Validate credentials from credentials.json.
  2. Connect to the backend MQTT broker and verify connectivity.
  3. Register the edge device and record a unique edge_fingerprint.
  4. Download the selected environment and resolve twins linked to the fingerprint.
  5. Start drivers for linked twins. Special handling for attached camera child twins:
    • If a twin is a camera child (has attach_to_twin_uuid), Edge Core does not start a separate driver for it.
    • Camera child UUIDs are passed to the parent driver via CYBERWAVE_CHILD_TWIN_UUIDS.
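Step 5 can be sketched as a small planning function. The field name attach_to_twin_uuid and the CYBERWAVE_CHILD_TWIN_UUIDS variable are from this section; the surrounding data shapes are assumptions for illustration:

```python
def plan_driver_starts(twins):
    """Split twins into driver starts, attaching camera children to parents.

    Each twin is assumed to be a dict with a "uuid" and, for camera children,
    an "attach_to_twin_uuid" pointing at the parent twin.
    Returns {parent_uuid: env_vars} -- one driver per key, no driver for children.
    """
    children = {}  # parent uuid -> list of child uuids
    parents = []
    for twin in twins:
        parent = twin.get("attach_to_twin_uuid")
        if parent:
            # Camera child: no separate driver; its UUID goes to the parent.
            children.setdefault(parent, []).append(twin["uuid"])
        else:
            parents.append(twin["uuid"])
    return {
        uuid: {
            "CYBERWAVE_TWIN_UUID": uuid,
            # Comma-separated, as CYBERWAVE_CHILD_TWIN_UUIDS is documented.
            "CYBERWAVE_CHILD_TWIN_UUIDS": ",".join(children.get(uuid, [])),
        }
        for uuid in parents
    }
```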

During driver startup, Docker image pull progress is mirrored into the edge-core service logs and forwarded through the same MQTT-backed driver log stream used for runtime container logs, so users can follow image download progress remotely.

Remote restart (Edge REST API)

Request a remote restart of Edge Core via the REST API:

POST /api/v1/edges/{uuid}/restart-core

The API will publish an MQTT message to the edge's command topic:

Topic: edges/{edge_uuid}/command

Example payload:

{ "command": "restart_edge_core" }

When Edge Core receives this command it performs a graceful restart consisting of:

  1. Removing cached twin JSON files from the edge config directory.
  2. Stopping and removing any edge-managed driver containers, then pruning stopped containers.
  3. Re-downloading the selected environment and restarting drivers.

The restart is intended to preserve durable state where possible. If connectivity is available before shutdown, Edge Core will attempt to sync any twin JSON changes back to the backend.
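A minimal sketch of parsing the command payload on the edge side. The topic and payload shape are from this section; the function itself is illustrative and tolerant of unknown commands so newer backends do not crash older edges:

```python
import json
from typing import Optional

RESTART_COMMAND = "restart_edge_core"

def handle_command_message(payload: bytes) -> Optional[str]:
    """Parse a message from edges/{edge_uuid}/command.

    Returns the recognized command name, or None for malformed payloads
    and commands this sketch does not know about.
    """
    try:
        message = json.loads(payload)
    except json.JSONDecodeError:
        return None
    command = message.get("command")
    return command if command == RESTART_COMMAND else None
```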

Writing compatible drivers

A Cyberwave driver is a Docker image that interacts with device hardware and the Cyberwave backend. When Edge Core starts a driver container, it provides the following environment variables to the container:

  • CYBERWAVE_TWIN_UUID
  • CYBERWAVE_API_KEY
  • CYBERWAVE_TWIN_JSON_FILE (writable file path)
  • CYBERWAVE_CHILD_TWIN_UUIDS (optional, comma-separated)

CYBERWAVE_CHILD_TWIN_UUIDS is present when child camera twins are attached to the driver twin; drivers can use this to coordinate cameras without additional prompts.
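From the driver side, reading this environment might look like the following sketch (variable names are from the list above; the helper is illustrative):

```python
import os

def read_driver_env():
    """Read the environment Edge Core provides to a driver container."""
    twin_uuid = os.environ["CYBERWAVE_TWIN_UUID"]            # always set
    api_key = os.environ["CYBERWAVE_API_KEY"]                # always set
    twin_json_file = os.environ["CYBERWAVE_TWIN_JSON_FILE"]  # writable path
    # Optional: only present when camera children are attached to this twin.
    raw_children = os.environ.get("CYBERWAVE_CHILD_TWIN_UUIDS", "")
    child_uuids = [u for u in raw_children.split(",") if u]
    return twin_uuid, api_key, twin_json_file, child_uuids
```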

Driver failure handling

Drivers must exit with a non-zero code when they cannot access required hardware (for example, missing /dev/video* or disconnected peripherals). This allows Edge Core to detect startup failures and trigger restart logic.

Edge Core alerts and behavior:

  • driver_start_failure: raised if a driver container cannot reach a stable running state.
  • driver_restart_loop: raised when a driver restarts more than the configured threshold (default 4 restarts within 60 seconds). The container is stopped and marked as flapping.

Optional environment variables to tune restart behavior:

  • CYBERWAVE_DRIVER_RESTART_LOOP_THRESHOLD (default: 4)
  • CYBERWAVE_DRIVER_RESTART_LOOP_WINDOW_SECONDS (default: 60)
  • CYBERWAVE_DRIVER_TROUBLESHOOTING_URL (default: https://docs.cyberwave.com)
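The restart-loop detection described above can be sketched with a rolling window; the env var names and defaults are from this section, while the class itself is an illustrative reimplementation, not the actual Edge Core code:

```python
import os
import time
from collections import deque

class RestartLoopDetector:
    """Flag a driver as flapping when restarts exceed the configured
    threshold within the rolling window (defaults: 4 restarts / 60 s)."""

    def __init__(self):
        self.threshold = int(
            os.environ.get("CYBERWAVE_DRIVER_RESTART_LOOP_THRESHOLD", "4"))
        self.window = float(
            os.environ.get("CYBERWAVE_DRIVER_RESTART_LOOP_WINDOW_SECONDS", "60"))
        self.restarts = deque()

    def record_restart(self, now=None) -> bool:
        """Record one restart; True means mark the driver as flapping."""
        now = time.monotonic() if now is None else now
        self.restarts.append(now)
        # Drop restarts that fell out of the rolling window.
        while self.restarts and now - self.restarts[0] > self.window:
            self.restarts.popleft()
        return len(self.restarts) > self.threshold
```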

Twin JSON file

CYBERWAVE_TWIN_JSON_FILE is an absolute path to a JSON file provided to the driver. The file contains the digital twin instance object (including its metadata) and the associated catalog twin data, matching the API schema: TwinSchema and AssetSchema.

Drivers may modify this file; Edge Core will sync changes back to the backend when connectivity is available.
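A minimal sketch of a driver persisting a metadata field through this file (the env var is from this section; the helper name and the specific field written are illustrative):

```python
import json
import os

def update_twin_metadata(key: str, value) -> dict:
    """Write a metadata field into the twin JSON file mounted for the driver.

    Edge Core syncs the modified file back to the backend when connectivity
    allows, so drivers can persist small bits of state this way.
    """
    path = os.environ["CYBERWAVE_TWIN_JSON_FILE"]
    with open(path) as f:
        twin = json.load(f)
    twin.setdefault("metadata", {})[key] = value
    with open(path, "w") as f:
        json.dump(twin, f, indent=2)
    return twin
```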

Twin metadata

Use the official Cyberwave SDK to interact with the API and MQTT; it abstracts authentication, retries, and handshake logic.

Register a driver by adding its configuration to a twin's metadata (or the catalog twin's metadata if you control the catalog twin). Use the environment view's Advanced editing to edit metadata.

Note: changing a catalog twin's metadata affects all subsequently created digital twins derived from that catalog twin.

Example driver metadata (JSON):

{
  "drivers": {
    "default": {
      "docker_image": "cyberwaveos/so101-driver",
      "version": "0.0.1",
      "params": [
        "--network",
        "local",
        "--add-host",
        "host.docker.internal:host-gateway"
      ]
    }
  }
}

Platform-specific driver selection

Edge Core can select platform-specific driver entries before falling back to default.

Selection order:

  1. Child-registry-specific entry (existing behavior)
  2. Host platform/machine keys (for example darwin-arm64, darwin, macos, mac)
  3. default

Example:

{
  "drivers": {
    "default": {
      "docker_image": "cyberwaveos/so101-driver"
    },
    "darwin-arm64": {
      "docker_image": "cyberwaveos/so101-driver:macos",
      "params": ["-e", "CYBERWAVE_SERIAL_BRIDGE_URL=tcp://host.docker.internal:22001"]
    }
  }
}
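The selection order above can be sketched roughly as follows. The key names (darwin-arm64, darwin, macos, mac, default) are from this section; the exact candidate ordering within the platform keys is an assumption, and the function is illustrative:

```python
import platform

def select_driver_entry(drivers: dict, registry_key=None) -> dict:
    """Pick a driver entry: child-registry key first, then host
    platform/machine keys, then "default"."""
    system = platform.system().lower()    # e.g. "darwin", "linux"
    machine = platform.machine().lower()  # e.g. "arm64", "x86_64"
    candidates = [registry_key, f"{system}-{machine}", system]
    if system == "darwin":
        candidates += ["macos", "mac"]
    candidates.append("default")
    for key in candidates:
        if key and key in drivers:
            return drivers[key]
    raise KeyError("no matching driver entry (not even 'default')")
```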

macOS host-device bridge hook

On macOS, Linux --device mappings in params cannot directly expose host hardware to Linux containers. Edge Core now supports a pre-run native bridge hook:

  • Set CYBERWAVE_MACOS_DEVICE_BRIDGE_COMMAND on the host
  • Edge Core executes it once per --device mapping before docker run
  • Template variables available:
    • {host_device}
    • {container_device}
    • {twin_uuid}
    • {container_name}
    • {config_dir}

Example:

export CYBERWAVE_MACOS_DEVICE_BRIDGE_COMMAND="cyberwave-edge-hw-bridge --device {host_device} --target {container_device} --twin {twin_uuid}"

The command can start native camera/serial forwarding services that expose bridge endpoints to the container (typically via host.docker.internal).

Bridge command stdout can optionally return a resolved source for the mapped device:

  • JSON: {"resolved_device":"rtsp://host.docker.internal:8554/cam0"}
  • or line format: resolved_device=rtsp://host.docker.internal:8554/cam0

When this value differs from /dev/video*, Edge Core can transparently:

  • inject CYBERWAVE_METADATA_VIDEO_DEVICE for the driver
  • inject CYBERWAVE_EDGE_VIDEO_DEVICE_MAP (JSON map of Linux device to resolved source)
  • remove Linux-only --device /dev/video* flags before docker run on macOS (default enabled with CYBERWAVE_MACOS_STRIP_VIDEO_DEVICE_PARAMS=true)

This lets Linux-style drivers keep their normal auto-setup logic while receiving a macOS-compatible video source without driver code changes.

For camera twins, Edge Core can also provide default bridge candidates on macOS even when metadata has no explicit --device params (default driver config), so Linux-oriented camera drivers remain compatible with minimal metadata.
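Parsing the two resolved_device stdout formats described above might look like this sketch (both formats are from this section; the function name and fallback behavior are illustrative):

```python
import json

def parse_resolved_device(stdout: str):
    """Extract the optional resolved source from bridge-command stdout.

    Accepts either documented format: a JSON object containing
    "resolved_device", or a resolved_device=... line.
    Returns None when neither is present.
    """
    for line in stdout.splitlines():
        line = line.strip()
        if line.startswith("{"):
            try:
                value = json.loads(line).get("resolved_device")
                if value:
                    return value
            except json.JSONDecodeError:
                pass
        elif line.startswith("resolved_device="):
            return line.split("=", 1)[1]
    return None
```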

To inject environment variables into a driver container, list -e flags inside params: each -e must be a separate element followed by its KEY=value string. Example:

{
  "drivers": {
    "default": {
      "docker_image": "cyberwaveos/go2-native-driver",
      "params": ["-e", "MY_VAR=value", "-e", "ANOTHER_VAR=value2"]
    }
  }
}

This is equivalent to passing -e MY_VAR=value on the docker run command line.

This is useful for driver-specific configuration that varies per device, such as IP addresses, credentials, or feature flags that cannot be stored in the twin's edge_configs metadata.
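The "-e followed by KEY=value" convention can be parsed with a small helper; this is an illustrative sketch, not Edge Core's actual parsing code:

```python
def extract_env_params(params):
    """Collect KEY=value environment pairs from a driver's params list.

    Follows the documented convention: each "-e" is its own element and
    the next element is the KEY=value string. Other flags are ignored.
    """
    env = {}
    it = iter(params)
    for flag in it:
        if flag == "-e":
            pair = next(it, None)
            if pair and "=" in pair:
                key, value = pair.split("=", 1)
                env[key] = value
    return env
```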

Runtime configuration for drivers (metadata["edge_configs"])

Drivers and edge services should treat metadata["edge_configs"] as the source of truth for per-device runtime configuration. Edge identity should be stored at metadata["edge_fingerprint"] (not duplicated inside edge_configs).

Runtime access: The core passes the full twin JSON (including metadata) to every driver via the CYBERWAVE_TWIN_JSON_FILE environment variable. Drivers can read edge_configs from that file at startup to obtain per-device settings — for example, selecting the right camera source or IP address for the current machine. This is the recommended way to pass device-specific configuration to a driver without hardcoding values in the image.

  • Type: object/dictionary
  • Value: named configuration objects (for example camera_config), each an object of per-device settings

Canonical shape:

{
  "edge_fingerprint": "macbook-pro-a1b2c3d4e5f6",
  "edge_configs": {
    "camera_config": {
      "camera_id": "front",
      "source": "rtsp://user:pass@192.168.1.20/stream",
      "fps": 10,
      "resolution": "VGA",
      "camera_type": "cv2"
    }
  }
}

Field notes:

  • edge_fingerprint: fingerprint of the edge serving this twin (recommended).
  • camera_config: per-device camera/runtime config consumed by drivers.

Avoid storing transient runtime state such as edge_uuid, registered_at, last_sync, last_ip_address, or status_data inside edge_configs.

Backward compatibility:

  • Older records may use a legacy map shape (edge_configs[fingerprint] = {...}).
  • Older records may store camera settings in cameras[0] or as top-level fields.
  • New writers should prefer camera_config under edge_configs.
  • Do not rely on PUT /api/v1/edges/{uuid}/twins/{twin_uuid}/camera-config; it is deprecated. Update twin metadata instead.
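Reading camera config with those fallbacks might look like the following sketch. The canonical shape and the legacy layouts (fingerprint-keyed map, cameras[0]) are from this section; the exact legacy field layouts can vary per record, so this is illustrative:

```python
def read_camera_config(metadata: dict):
    """Read per-device camera config, preferring the canonical shape
    and falling back to the documented legacy layouts."""
    edge_configs = metadata.get("edge_configs", {})
    # Canonical: edge_configs.camera_config
    if "camera_config" in edge_configs:
        return edge_configs["camera_config"]
    # Legacy: edge_configs keyed by fingerprint -> {"camera_config": {...}}
    fingerprint = metadata.get("edge_fingerprint")
    legacy = edge_configs.get(fingerprint, {}) if fingerprint else {}
    if "camera_config" in legacy:
        return legacy["camera_config"]
    # Legacy: camera settings stored in cameras[0]
    cameras = metadata.get("cameras") or []
    return cameras[0] if cameras else None
```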

Advanced usage

Manual install and troubleshooting

# Install the Buildkite package signing key
curl -fsSL "https://packages.buildkite.com/cyberwave/cyberwave-edge-core/gpgkey" | gpg --dearmor -o /etc/apt/keyrings/cyberwave_cyberwave-edge-core-archive-keyring.gpg

# Configure the Apt source
echo -e "deb [signed-by=/etc/apt/keyrings/cyberwave_cyberwave-edge-core-archive-keyring.gpg] https://packages.buildkite.com/cyberwave/cyberwave-edge-core/any/ any main\ndeb-src [signed-by=/etc/apt/keyrings/cyberwave_cyberwave-edge-core-archive-keyring.gpg] https://packages.buildkite.com/cyberwave/cyberwave-edge-core/any/ any main" \
  > /etc/apt/sources.list.d/buildkite-cyberwave-cyberwave-edge-core.list

# Update package lists and install the package
apt update && apt install -y cyberwave-edge-core

# Run Edge Core (performs startup checks and starts drivers)
cyberwave-edge-core

# Show status, credentials and MQTT connectivity (read-only)
cyberwave-edge-core status

# Show version
cyberwave-edge-core --version

Preview builds from dev/staging CI are published as separate Debian packages in the same apt repo: cyberwave-edge-core-dev and cyberwave-edge-core-staging. apt install cyberwave-edge-core only pulls tagged releases; install one of the channel packages explicitly when you want those binaries. The channel packages conflict with one another because they all ship the same /usr/bin/cyberwave-edge-core.

Environment variables

Run against a different environment/base URL:

export CYBERWAVE_ENVIRONMENT="yourenv"
export CYBERWAVE_BASE_URL="https://yourbaseurl"
cyberwave-edge-core

Control log verbosity (default: INFO):

export CYBERWAVE_EDGE_LOG_LEVEL="DEBUG"
cyberwave-edge-core

Or pass env vars to the CLI installer:

sudo CYBERWAVE_ENVIRONMENT="yourenv" CYBERWAVE_BASE_URL="https://yourbaseurl" CYBERWAVE_MQTT_HOST="yourmqtt" cyberwave edge install

Local development (from this folder)

You can develop both the Cyberwave CLI and Edge Core from the cyberwave-edge-core directory using a single virtual environment that has the monorepo SDK, CLI, and edge-core installed in editable mode.

One-time setup

From cyberwave-edge-core/:

# Create and activate a venv (e.g. .venv in this folder)
python3 -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate

# Install SDK, CLI, and Edge Core in editable mode (order matters: SDK first)
pip install -e ../cyberwave-sdks/cyberwave-python
pip install -e "../cyberwave-clis/cyberwave-python-cli[build]"
pip install -e ".[build]"

Generate the SDK REST client (required for editable SDK). The SDK’s cyberwave.rest package is generated from the backend OpenAPI spec and is not committed. If you see ImportError: cannot import name 'DefaultApi' from 'cyberwave.rest':

  1. Start the backend: cd ../cyberwave-backend && docker compose -f local.yml up -d (wait until healthy).
  2. From the repo root, generate the REST client:
    cd cyberwave-sdks && ./python-sdk-gen.sh sdk --host localhost:8000
    
  3. Re-run the pip install -e steps above if you already installed; the editable SDK will then include the generated cyberwave/rest code.

Run CLI and Edge Core

After activating the venv, both commands are on your PATH:

# CLI
cyberwave --help
cyberwave login --email boss@cyberwave.com --password iamnottheboss
cyberwave edge install --help

# Edge Core
cyberwave-edge-core --help
cyberwave-edge-core status
cyberwave-edge-core

Target backend: If you do not set CYBERWAVE_BASE_URL, the CLI and Edge Core use the default production API (https://api.cyberwave.com). To use your local backend instead:

export CYBERWAVE_BASE_URL=http://localhost:8000
export CYBERWAVE_MQTT_HOST=localhost
export CYBERWAVE_ENVIRONMENT=local

Paths from this folder

What        Path (from cyberwave-edge-core/)
Repo root   ..
Python SDK  ../cyberwave-sdks/cyberwave-python
CLI         ../cyberwave-clis/cyberwave-python-cli

Edit code in any of those directories; the editable installs pick up changes (no reinstall needed for Python changes).

Contributing

Contributions are welcome. Please open an issue to discuss bugs or feature requests, and submit a pull request when you are ready.
