Cyberwave Edge Core

The core component of the Cyberwave Edge Node.
This module is part of Cyberwave: Making the physical world programmable.
Cyberwave Edge Core acts as the orchestrator of Cyberwave edge drivers.
Quickstart
SSH to the edge device where you want to install Edge Core, then install the Cyberwave CLI and run the installer:
# Install the Cyberwave CLI (one-time setup)
curl -fsSL https://cyberwave.com/install.sh | bash
# Run the edge installer (interactive)
sudo cyberwave edge install
The installer will prompt you to log in with your Cyberwave account, select a workspace and environment, and persist configuration under ~/.cyberwave/. You can override the config directory via the CYBERWAVE_EDGE_CONFIG_DIR environment variable. Legacy installs that used /etc/cyberwave are automatically migrated.
Permissions on the config directory:
- `credentials.json` is written with mode `0600` (owner-only) because it holds your API token.
- `fingerprint.json` is written with mode `0644` (world-readable) because the device fingerprint is a hardware identifier, not a secret. This lets user shells read it even when Edge Core runs as root via `systemd`.
- On `systemd` deployments where Edge Core runs as root, the service re-chowns files under `~/.cyberwave/` on startup so they stay owned by the user whose home directory holds them.
Don't have a Cyberwave account? Get one at cyberwave.com
Config files created
The installer and Edge Core create these files in the config directory:
| File | Description |
|---|---|
| `credentials.json` | API token and workspace information |
| `fingerprint.json` | Device fingerprint (generated by Edge Core) |
| `environment.json` | Selected environment and twin UUIDs |
Edge Core requires credentials.json to operate. fingerprint.json is produced by Edge Core; environment.json is written by the CLI during setup.
How Edge Core works
On startup (service or direct run), Edge Core performs the following steps:
1. Validate credentials from `credentials.json`.
2. Connect to the backend MQTT broker and verify connectivity.
3. Start a bootstrap health publisher that sends periodic edge health messages while drivers are starting up.
4. Register the edge device and record a unique `edge_fingerprint`.
5. Download the selected environment and resolve twins linked to the fingerprint.
6. Start drivers for linked twins, with special handling for attached camera child twins:
   - If a twin is a camera child (has `attach_to_twin_uuid`), Edge Core does not start a separate driver for it.
   - Camera child UUIDs are passed to the parent driver via `CYBERWAVE_CHILD_TWIN_UUIDS`.
7. Stop the bootstrap health publisher once drivers are running (drivers publish their own health messages; keeping both would produce duplicate signals in the UI).
8. Pull workflow workers (`wf_*.py`) for the twins listed in `environment.json` and start the worker container (if any worker files exist in `{config_dir}/workers/`).
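The camera-child handling during driver startup can be sketched as follows. This is an illustrative reconstruction, not Edge Core's actual code; the twin dict shape and the `plan_drivers` helper are assumptions:

```python
# Sketch: twins with attach_to_twin_uuid get no driver of their own; their
# UUIDs are handed to the parent driver via CYBERWAVE_CHILD_TWIN_UUIDS.
def plan_drivers(twins: list[dict]) -> dict[str, dict]:
    """Map each parent twin UUID to the env vars for its driver container."""
    plans = {
        t["uuid"]: {"CYBERWAVE_TWIN_UUID": t["uuid"]}
        for t in twins
        if not t.get("attach_to_twin_uuid")  # camera children get no driver
    }
    # Append each child camera UUID to its parent's environment.
    for t in twins:
        parent = t.get("attach_to_twin_uuid")
        if parent in plans:
            env = plans[parent]
            existing = env.get("CYBERWAVE_CHILD_TWIN_UUIDS")
            env["CYBERWAVE_CHILD_TWIN_UUIDS"] = (
                f"{existing},{t['uuid']}" if existing else t["uuid"]
            )
    return plans

twins = [
    {"uuid": "robot-1"},
    {"uuid": "cam-a", "attach_to_twin_uuid": "robot-1"},
    {"uuid": "cam-b", "attach_to_twin_uuid": "robot-1"},
]
plans = plan_drivers(twins)
```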
Scope of workflow worker sync
Workflow worker sync is scoped strictly to the twin UUIDs the operator selected at install time and persisted to environment.json under twin_uuids. This is intentionally narrower than the fingerprint-based discovery used for drivers and the bootstrap health publisher: an environment can carry stale metadata.edge_fingerprint entries from previous installs, and we don't want those to pull unrelated wf_*.py files onto this edge. For backward compatibility, installs that predate the twin_uuids field still fall back to fingerprint-based discovery.
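The scoping rule above can be sketched roughly like this; the `worker_sync_scope` helper and the field-access paths are illustrative assumptions, not the real implementation:

```python
import json

def worker_sync_scope(environment_json: str, fingerprint: str,
                      all_twins: list[dict]) -> list[str]:
    """Prefer the operator-selected twin_uuids; fall back to fingerprint discovery."""
    env = json.loads(environment_json)
    selected = env.get("twin_uuids")
    if selected:  # modern installs: strict operator-selected scope
        return list(selected)
    # Legacy fallback for installs predating twin_uuids: fingerprint discovery
    # (may match stale metadata.edge_fingerprint entries).
    return [t["uuid"] for t in all_twins
            if t.get("metadata", {}).get("edge_fingerprint") == fingerprint]

env_doc = json.dumps({"twin_uuids": ["t-1", "t-2"]})
legacy_twins = [{"uuid": "t-9", "metadata": {"edge_fingerprint": "fp-a"}}]
scope = worker_sync_scope(env_doc, "fp-a", legacy_twins)   # selected wins
legacy = worker_sync_scope("{}", "fp-a", legacy_twins)     # legacy fallback
```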
During driver startup, Docker image pull progress is mirrored into the edge-core service logs and forwarded through the same MQTT-backed driver log stream used for runtime container logs, so users can follow image download progress remotely.
Remote restart (Edge REST API)
Request a remote restart of Edge Core via the REST API:
POST /api/v1/edges/{uuid}/restart-core
The API will publish an MQTT message to the edge's command topic:
Topic: edges/{edge_uuid}/command
Example payload:
{ "command": "restart_edge_core" }
When Edge Core receives this command it performs a graceful restart consisting of:
- Stopping the worker container (if running).
- Removing cached twin JSON files from the edge config directory.
- Stopping and removing any edge-managed driver containers, then pruning stopped containers.
- Resolving any active `driver_starting` alerts on the affected twins so leftovers from the previous run do not stay visible after the restart.
- Re-downloading the selected environment and restarting drivers.
- Restarting the worker container (if worker files exist).
The restart is intended to preserve durable state where possible. If connectivity is available before shutdown, Edge Core will attempt to sync any twin JSON changes back to the backend.
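From a client's point of view, the restart command is just an MQTT publish on the edge's command topic. A minimal sketch (the `paho-mqtt` publish is commented out because it needs a reachable broker; the helper name and broker host are illustrative):

```python
import json

def restart_command(edge_uuid: str) -> tuple[str, str]:
    """Build the MQTT topic and payload for a remote Edge Core restart."""
    topic = f"edges/{edge_uuid}/command"
    payload = json.dumps({"command": "restart_edge_core"})
    return topic, payload

topic, payload = restart_command("b7c9e2f4")

# Publishing would then look like (requires a reachable broker):
# import paho.mqtt.client as mqtt
# client = mqtt.Client()
# client.connect("mqtt.example.com")  # hypothetical broker host
# client.publish(topic, payload)
```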
Each driver startup attempt creates a driver_starting twin alert that tracks the in-flight startup (image pull, container launch, post-launch health probe). The alert is automatically resolved once the driver container is observed running, and is annotated and resolved with a failure phase if the attempt fails. The alert is therefore guaranteed to clear when the driver has restarted; longer-lived failure conditions are surfaced through separate driver_start_failure alerts created by the orchestrator.
Model Manager (ML model cache)
Edge Core includes a ModelManager that resolves ML model weights into a local cache before starting the worker container. It is designed for both online deployments and air-gapped sites: it prefers fresh weights from Cyberwave when the network is available, falls back to upstream public mirrors, and finally to whatever is already on disk.
Cache location:
| Platform | Default path |
|---|---|
| All | ~/.cyberwave/models/ |
Override with CYBERWAVE_EDGE_CONFIG_DIR.
Cache layout:
<cache_dir>/
├── manifest.json # index of all cached models
├── yolov8n/
│ ├── yolov8n.pt # weight file
│ └── metadata.json # checksum, runtime, source URL, upstream URL
└── background-subtraction/
└── ...
Model requirements discovery: Edge Core scans *.py files in ~/.cyberwave/workers/ for cw.models.load(...) calls to determine which weights to ensure.
~/.cyberwave/models/ and ~/.cyberwave/workers/ are created eagerly on Edge Core startup (even before any worker runs), with ownership matching the invoking user, so operators can drop pre-staged weights into ~/.cyberwave/models/{model_id}/ from a regular shell.
Resolution order
For each required model, `ensure_model(model_id)` runs the following steps:

1. **Reconcile disk.** If `cache_dir/{model_id}/` already contains a weight file (with or without a sidecar), it is registered in the manifest. A missing sidecar is generated from the on-disk file (with a freshly computed SHA-256) and tagged `downloaded_from: prestaged`. This is how an operator pre-stages weights from a USB stick on an air-gapped site.
2. **Verify cache integrity.** If the local file's SHA-256 matches the manifest checksum, the cache is intact.
3. **Pre-staged short-circuit.** When an intact entry is tagged `downloaded_from: prestaged`, Edge Core returns it without ever contacting the catalog. Pre-staged files are operator-curated truth; to force a re-download, evict the model directory.
4. **Best-effort catalog probe.** For non-prestaged intact entries, Edge Core does a short-timeout `GET /api/v1/mlmodels/...` to compare checksums.
   - Catalog unreachable, no checksum, or matching checksum → return the cached file (no download).
   - Catalog returns a different checksum → fall through to the download path.
5. **Download.** Sources are tried in priority order:
   - Cyberwave-hosted signed URL from `GET /api/v1/mlmodels/{uuid}/weights` — used for checkpoints we have uploaded to our private GCS bucket (e.g. internally trained or mirrored models). Authenticated, served from infrastructure we control.
   - Upstream weights URL from the catalog entry (`download_url` / `metadata.upstream_weights_url`) — used for community checkpoints we did not mirror.

   The first source that yields a checksum-verified file wins. The sidecar records `downloaded_from` (`artifact_url` / `download_url` / `prestaged`), `source_url` (the public URL we fetched, or `null` for artifact downloads — the signed URL expires in minutes and is useless to persist), and `upstream_url` (provenance).
6. **Fail-soft.** If every download attempt fails and the cached file is intact, Edge Core returns the cached path with a warning. This keeps workers running across transient network failures and on permanently air-gapped sites. If the cache is empty or corrupt, a `RuntimeError` is raised.
Cache integrity: SHA-256 checksums are verified on every ensure_model call when a checksum is recorded — on cold start, after every download, during disk reconciliation, and on every warm-cache hit. There is no shortcut. For multi-gigabyte checkpoints this is the dominant cost of the call, but ensure_model only runs when a worker file changes (rare) or the worker container restarts; in exchange we get unconditional bit-rot detection. A checksum mismatch on a downloaded artifact triggers a re-download attempt; a download whose checksum does not match the catalog is rejected and the partial file removed.
Pre-staging weights for air-gapped deployments
On a site without internet access, an operator can place weights directly into the cache:
mkdir -p ~/.cyberwave/models/yolov8n
cp /usb-stick/yolov8n.pt ~/.cyberwave/models/yolov8n/
Edge Core picks up the file on the next ensure_model("yolov8n") call, computes a SHA-256, and writes a sidecar metadata.json so subsequent runs are deterministic. The runtime field is inferred from the file extension (.pt → ultralytics, .onnx → onnxruntime, .engine/.trt → tensorrt, .tflite → tflite, .pth → torch, .xml → opencv); provide a hand-written metadata.json to override.
Updating in place. Operators can drop a new build into the same directory and Edge Core will detect the change on the next call:
cp /usb-stick/yolov8n-v2.pt ~/.cyberwave/models/yolov8n/yolov8n.pt
The mismatch between the on-disk SHA-256 and the manifest checksum triggers a re-stamp (not a re-download), provided the sidecar still records downloaded_from: prestaged. This keeps offline edges functional across model upgrades. Files that were previously downloaded by Edge Core keep the corruption-detection semantics — bit-rot still triggers a re-download attempt rather than being silently accepted.
Pre-staged files are never auto-overwritten by catalog updates. Once a file lives under cache_dir/{model_id}/ with a downloaded_from: prestaged sidecar, Edge Core treats it as the source of truth and skips the catalog probe entirely. To force a re-download from the Cyberwave catalog, evict the model:
rm -rf ~/.cyberwave/models/yolov8n
Provide a hand-written metadata.json (with filename, checksum_sha256, and runtime) when there are multiple weight files in the directory or when corruption detection should compare against a known-good hash.
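For such directories, a hand-written sidecar could look like the sketch below. The fields are the ones named above; the checksum value is a placeholder, so substitute the real SHA-256 digest of your file:

```json
{
  "filename": "yolov8n.pt",
  "checksum_sha256": "<sha256 hex digest of yolov8n.pt>",
  "runtime": "ultralytics"
}
```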
Worker container
Edge Core manages one ML worker container per edge device (container name: cyberwave-worker-{env_uuid[:8]}). The worker container runs Python worker scripts from the local workers directory and has access to cached model weights.
Worker directory layout
Place worker scripts in {config_dir}/workers/ (default: ~/.cyberwave/workers/):
~/.cyberwave/
├── workers/
│ ├── detect_people.py # Custom worker
│ └── cyberwave.yml # Optional: list model requirements
└── models/ # Auto-managed model cache
├── manifest.json
└── yolov8n/
└── yolov8n.pt
cyberwave.yml
Optionally declare model requirements so Edge Core can pre-download them before starting the worker container:
models:
- yolov8n
- background-subtraction
Edge Core also auto-detects models by scanning cw.models.load("...") calls in worker Python files.
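The auto-detection could be approximated with a simple scan; using a regex here is an assumption for illustration (the real scanner may parse the AST instead):

```python
import re
import tempfile
from pathlib import Path

# Match the model id inside cw.models.load("...") calls.
LOAD_RE = re.compile(r"""cw\.models\.load\(\s*["']([^"']+)["']""")

def required_models(workers_dir: Path) -> set[str]:
    """Collect model ids referenced by worker .py files."""
    found: set[str] = set()
    for py in workers_dir.glob("*.py"):
        found.update(LOAD_RE.findall(py.read_text()))
    return found

workers = Path(tempfile.mkdtemp())
(workers / "detect_people.py").write_text(
    'model = cw.models.load("yolov8n")\n'
    'bg = cw.models.load("background-subtraction")\n'
)
models = required_models(workers)
```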
Worker container environment variables
| Variable | Value |
|---|---|
| `CYBERWAVE_API_KEY` | Injected from credentials |
| `CYBERWAVE_ENVIRONMENT_UUID` | Active environment UUID |
| `CYBERWAVE_TWIN_UUIDS` | Comma-separated twin UUIDs in environment |
| `CYBERWAVE_DATA_BACKEND` | `zenoh` |
| `ZENOH_CONNECT` | Set when a Zenoh router is configured |
| `ZENOH_SHARED_MEMORY` | `false` by default (opt-in; requires `--ipc=host`) |
File watching and hot-reload
Edge Core monitors {config_dir}/workers/ every reconcile cycle (~15 seconds). When .py files are added, removed, or modified, Edge Core automatically:
- Re-scans model requirements.
- Pre-downloads any missing models.
- Restarts the worker container with the updated files.
A minimum cool-down of 10 seconds between successive automatic restarts prevents rapid churn when files are written incrementally (e.g. by rsync or scp).
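The cool-down behaves like a simple gate; a sketch with an illustrative `RestartGate` class (the timestamps stand in for a monotonic clock):

```python
class RestartGate:
    """Suppress automatic restarts arriving within the cool-down window."""

    def __init__(self, cooldown_s: float = 10.0):
        self.cooldown_s = cooldown_s
        self.last_restart: float | None = None

    def should_restart(self, now: float) -> bool:
        if (self.last_restart is not None
                and now - self.last_restart < self.cooldown_s):
            return False  # still cooling down; the next reconcile retries
        self.last_restart = now
        return True

gate = RestartGate()
# Incremental writes at t=0, 4, 9.9 s collapse into one restart; the
# change at t=12 s is past the cool-down and triggers another.
decisions = [gate.should_restart(t) for t in (0.0, 4.0, 9.9, 12.0)]
```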
Workflow-driven worker lifecycle
The worker container is brought up and torn down based on whether any active workflows are currently synced for the connected twins:
- **Startup:** after pulling worker files from the backend (step 8), Edge Core inspects `{config_dir}/workers/`. If at least one `wf_*.py` file is present, the worker container is started; otherwise it is left down — and the `cyberwaveos/edge-ml-worker` image is not pulled.
- **Periodic reconcile:** every ~5 minutes (configurable via `CYBERWAVE_WORKER_SYNC_INTERVAL_LOOPS`), Edge Core resyncs worker files from the backend. If a workflow was activated mid-run and new files appeared, the worker container is started. If every workflow was deactivated and the directory is now empty, the container is stopped. Both calls are idempotent.
- **Immediate reconcile on activate:** when a `run_on_edge` workflow is activated (UI, CLI, or API), the backend publishes a `sync_workflows` command on `cyberwave/twin/{twin_uuid}/command` for each twin the workflow references. Edge Core runs `reconcile_worker_sync` right away so the new `wf_*.py` lands within seconds instead of up to one periodic interval. Failures fall back to the periodic reconcile; the MQTT nudge is best-effort, not a correctness guarantee.
- **Sync errors:** if a sync cycle reports any errors, the lifecycle reconcile is skipped to avoid churning a healthy worker on transient API failures. The next successful sync re-evaluates state.
Worker image refresh policy
`WorkerManager._ensure_image_pulled` decides whether to issue `docker pull` before each worker (re)start:

| Tag basename (with optional `-gpu`/`-cpu`/`-arch` suffix) | Mutability | Pull behaviour |
|---|---|---|
| `latest`, `dev`, `local`, `staging`, `nightly`, `edge`, `main`, `master` | Mutable | Pull every time, even when the image is already present locally. If the registry is unreachable but a local copy exists, fall back to the local copy and warn. |
| Anything else (`v1.2.3`, dated build IDs, `@sha256:…`) | Immutable | Skip the pull when the image is already present locally; only pull when missing. |
This avoids the previous failure mode where a stale cyberwaveos/edge-ml-worker:dev-gpu image stayed cached after a developer pushed a new build — operators no longer need to remember to docker rmi before restarting the worker. Immutable tags keep the original fast-path so versioned production deployments are not slowed down by an extra round-trip to the registry on every restart.
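The tag classification can be sketched as below; the exact arch-suffix list is an assumption, and the real decision logic may differ in detail:

```python
MUTABLE = {"latest", "dev", "local", "staging", "nightly", "edge", "main", "master"}
SUFFIXES = ("-gpu", "-cpu", "-arm64", "-amd64")  # arch suffix list is assumed

def is_mutable_tag(ref: str) -> bool:
    """True when the image tag should be re-pulled on every restart."""
    if "@sha256:" in ref:
        return False  # digest-pinned references are immutable by definition
    last = ref.rsplit("/", 1)[-1]          # drop registry/repo path
    tag = last.rsplit(":", 1)[1] if ":" in last else "latest"
    for suffix in SUFFIXES:
        if tag.endswith(suffix):
            tag = tag[: -len(suffix)]      # strip -gpu/-cpu/-arch
            break
    return tag in MUTABLE
```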
Pinning a custom worker image (CYBERWAVE_WORKER_IMAGE)
resolve_worker_image() consults CYBERWAVE_WORKER_IMAGE before falling back to the CYBERWAVE_ENVIRONMENT-derived tag. This is the worker-side counterpart to the driver_overrides field in credentials.json for camera/asset drivers, and is the recommended way to run a locally-built or hot-patched worker image without re-pushing to the registry.
Two ways to set it (both honoured by get_runtime_env_var):
1. **Operator config** (preferred for dev hosts, no `sudo`) — add to the `envs` block of `~/.cyberwave/credentials.json`:

   ```json
   {
     "envs": {
       "CYBERWAVE_ENVIRONMENT": "dev",
       "CYBERWAVE_WORKER_IMAGE": "cyberwaveos/edge-ml-worker:local"
     }
   }
   ```

2. **Systemd drop-in** (preferred for managed deployments) — `sudo systemctl edit cyberwave-edge-core`:

   ```ini
   [Service]
   Environment=CYBERWAVE_WORKER_IMAGE=cyberwaveos/edge-ml-worker:local
   ```
Either way, restart edge-core (sudo systemctl restart cyberwave-edge-core) to reload the resolver.
Use the :local tag (no -gpu/-cpu suffix) — same convention as the camera-driver :local tag. _run_container auto-appends -gpu for cyberwaveos/edge-ml-worker:* overrides on GPU hosts, so the operator only commits and pins one tag.
Typical hot-fix loop (mirroring the camera-driver :local tag pattern):
# 1. Hot-patch the running container (e.g. swap an SDK runtime file).
WORKER=cyberwave-worker-<env-uuid-prefix>   # container name is cyberwave-worker-{env_uuid[:8]}
docker cp /path/to/patched/file.py "$WORKER:/usr/local/lib/python3.12/dist-packages/cyberwave/.../file.py"
docker exec -u root "$WORKER" rm /usr/local/.../file.cpython-312-x86_64-linux-gnu.so
# 2. Snapshot the patched container as a local-only tag (and an alias on the
# GPU-suffixed tag, so the same image is used regardless of which path
# `_run_container` resolves to on this host).
docker commit "$WORKER" cyberwaveos/edge-ml-worker:local
docker tag cyberwaveos/edge-ml-worker:local cyberwaveos/edge-ml-worker:local-gpu
# 3. Tell edge-core to use it (see the two options above), then restart it.
# Pull will fail (registry has no :local) and `_ensure_image_pulled`
# falls back to the locally-present image.
sudo systemctl restart cyberwave-edge-core
The patched image survives every docker rm/docker run cycle from edge-core's reconcile loop. To revert, remove the env var (or systemctl revert cyberwave-edge-core if you used the dropin) and docker rmi cyberwaveos/edge-ml-worker:local cyberwaveos/edge-ml-worker:local-gpu.
When overriding to a registry outside cyberwaveos/edge-ml-worker:* (e.g. an internal mirror), include the -gpu suffix yourself if you need GPU access — the auto-suffix path in _run_container only triggers for the canonical cyberwaveos/edge-ml-worker: prefix.
Worker health monitoring
Edge Core continuously monitors the worker container for spontaneous exits and crash loops:
- Restart accounting: every restart is recorded with a timestamp and reason.
- Sliding-window rate limiting: if more than 5 restarts occur within 5 minutes, the circuit-breaker trips and automatic restarts are suppressed. The breaker resets automatically once the window clears.
- Spontaneous exit detection: if the container exits without a deliberate restart, a warning is logged so operators can investigate.
Use cyberwave-edge-core worker health to inspect the full restart history and circuit-breaker state.
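The sliding-window breaker can be sketched as follows (thresholds come from the text above; the class shape is illustrative):

```python
from collections import deque

class CircuitBreaker:
    """Trip when more than max_restarts occur within the window."""

    def __init__(self, max_restarts: int = 5, window_s: float = 300.0):
        self.max_restarts = max_restarts
        self.window_s = window_s
        self.restarts: deque[float] = deque()

    def record_restart(self, now: float) -> None:
        self.restarts.append(now)

    def tripped(self, now: float) -> bool:
        # Drop events that have aged out of the window; the breaker
        # therefore resets automatically once the window clears.
        while self.restarts and now - self.restarts[0] > self.window_s:
            self.restarts.popleft()
        return len(self.restarts) > self.max_restarts

cb = CircuitBreaker()
for t in range(6):                 # 6 restarts within 6 seconds
    cb.record_restart(float(t))
tripped_now = cb.tripped(6.0)      # 6 > 5 within the window
tripped_later = cb.tripped(400.0)  # window has cleared
```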
Resource limits
You can constrain the worker container's CPU and memory usage by setting CYBERWAVE_WORKER_CPU_QUOTA_PERCENT and CYBERWAVE_WORKER_MEMORY_MB environment variables on the edge host (both optional). When set, Edge Core passes the corresponding --cpu-quota, --cpu-period, and --memory flags to docker run.
GPU memory fraction can be limited via CYBERWAVE_GPU_MEM_FRACTION (a float between 0 and 1); this is passed as an env var into the worker container.
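Translating the two CPU/memory variables into `docker run` flags might look like this; mapping the CPU percent onto Docker's default 100 ms CFS period is an assumption about the implementation:

```python
def worker_resource_flags(env: dict[str, str]) -> list[str]:
    """Build docker run flags from the optional resource-limit env vars."""
    flags: list[str] = []
    cpu = env.get("CYBERWAVE_WORKER_CPU_QUOTA_PERCENT")
    if cpu:
        period = 100_000  # Docker's default CFS period: 100 ms, in microseconds
        flags += ["--cpu-period", str(period),
                  "--cpu-quota", str(period * int(cpu) // 100)]
    mem = env.get("CYBERWAVE_WORKER_MEMORY_MB")
    if mem:
        flags += ["--memory", f"{mem}m"]
    return flags

flags = worker_resource_flags({
    "CYBERWAVE_WORKER_CPU_QUOTA_PERCENT": "50",
    "CYBERWAVE_WORKER_MEMORY_MB": "2048",
})
```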
GPU support
When an NVIDIA container runtime is detected (docker info reports nvidia runtime), Edge Core adds --gpus all to the worker container's docker run command.
Driver GPU passthrough is opt-in via asset metadata. When a driver's config includes "prefer_gpu": true and the host has:
- NVIDIA container runtime available (`docker info` reports `nvidia`), and
- `nvidia` set as the default runtime in `/etc/docker/daemon.json`
…Edge Core passes --gpus to the driver container. The optional "gpu" field controls which GPUs are exposed:
| `gpu` value | Docker flag | Use case |
|---|---|---|
| (not set) | `--gpus all` | All available GPUs (default) |
| `1` | `--gpus 1` | Limit to 1 GPU |
| `"device=0,2"` | `--gpus "device=0,2"` | Specific GPU devices |
Example driver metadata:
{
"drivers": {
"default": {
"docker_image": "cyberwaveos/go2-ros2-driver:humble",
"prefer_gpu": true,
"gpu": "all"
}
}
}
If the NVIDIA runtime is available but not configured as the default in daemon.json, Edge Core logs an informational message with setup instructions instead of silently skipping GPU passthrough.
Jetson detection
Edge Core auto-detects NVIDIA Jetson hardware via /etc/nv_tegra_release. When running on a Jetson:
- The platform key `linux-aarch64-jetson` is added to the driver resolution order, allowing asset metadata to specify a Jetson-optimised image.
- If no `linux-aarch64-jetson` driver key exists in metadata, Edge Core rewrites the image tag by prepending `jetson-` (e.g. `cyberwaveos/go2-ros2-driver:humble` → `cyberwaveos/go2-ros2-driver:jetson-humble`). If the Jetson-prefixed image is not available, it falls back to the original tag automatically.
Override detection with CYBERWAVE_PLATFORM_VARIANT=jetson for testing.
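The tag rewrite is a small string transform; a sketch (leaving untagged references unchanged here is an assumption, as is the helper name):

```python
def jetson_image(image: str) -> str:
    """Prepend jetson- to the tag, e.g. repo:humble -> repo:jetson-humble."""
    repo, sep, tag = image.rpartition(":")
    if not sep:
        return image  # untagged refs returned unchanged in this sketch
    return f"{repo}:jetson-{tag}"

rewritten = jetson_image("cyberwaveos/go2-ros2-driver:humble")
```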
Multi-camera orchestration
When multiple cameras are connected to the same edge device (each represented as a separate digital twin), Edge Core orchestrates them as follows:
- **One driver per camera:** each camera twin gets its own `cyberwave-driver-{uuid[:8]}` container. Child camera twins that are attached to a parent twin share the parent's driver instead.
- **One shared worker:** a single `cyberwave-worker-{env[:8]}` container receives frames from all cameras. The worker container receives `CYBERWAVE_TWIN_UUIDS` as a comma-separated list of all linked twins.
- **Readiness probes:** Edge Core waits for all driver containers to reach a `running` state before starting the worker. If some drivers fail, the worker starts anyway so healthy cameras can be utilized.
- **Model pre-download:** before the worker starts, Edge Core scans worker scripts and pre-downloads all referenced ML models.
- **Driver health monitoring:** if a driver goes down while the worker is running, Edge Core sends an alert to the affected twin.
Use cyberwave edge status to see all driver and worker containers with their twin mappings.
Multi-container drivers
Some robots require multiple cooperating containers (e.g. a driver, bridge nodes, Nav2, SLAM, and elevation mapping). Edge Core supports this via an optional services array in the driver metadata. When present, Edge Core launches one container per service instead of a single driver container.
Metadata schema
{
"drivers": {
"linux-aarch64-jetson": {
"services": [
{
"image": "cyberwaveos/go2-ros2-driver:jetson-humble",
"name": "driver",
"command": ["ros2", "launch", "cyberwave_go2_driver", "robot_driver.launch.py"]
},
{
"image": "cyberwaveos/go2-ros2-driver:jetson-humble",
"name": "bridges",
"command": ["ros2", "launch", "cyberwave_go2_driver", "robot_bridges.launch.py"]
},
{
"image": "cyberwaveos/ros2-nav2:jetson-humble",
"name": "nav2"
},
{
"image": "cyberwaveos/ros2-slam:jetson-humble",
"name": "slam"
},
{
"image": "cyberwaveos/ros2-elevation-mapping:jetson-humble",
"name": "elevation",
"prefer_gpu": true
}
],
"shared_env": {
"CONFIG_PROFILE": "jetson",
"ROS_DOMAIN_ID": "0",
"CYBERWAVE_MAP_DIR": "/data"
},
"shared_params": ["--network", "host", "-v", "/data:/data"]
},
"default": {
"docker_image": "cyberwaveos/go2-ros2-driver",
"prefer_gpu": true
}
}
}
How it works
- `services` present → multi-container mode (one container per service entry).
- `docker_image` present, no `services` → single-container mode (existing behavior, unchanged).
- The `default` fallback key works as before for platforms that don't match a specific key.
Per-service fields
| Field | Required | Description |
|---|---|---|
| `image` | Yes | Docker image reference |
| `name` | Yes | Service name (used in container naming) |
| `command` | No | Override the container entrypoint command |
| `env` | No | Per-service environment variables |
| `params` | No | Per-service Docker params |
| `prefer_gpu` | No | Enable GPU passthrough for this service |
| `gpu` | No | GPU device selector (default: `all`) |
Shared configuration
- `shared_env`: environment variables applied to every service. Per-service `env` overrides shared values.
- `shared_params`: Docker params applied to every service (e.g. `--network host`, volume mounts).
Environment layering
1. Edge Core base env (`CYBERWAVE_API_KEY`, MQTT, Zenoh, etc.) — existing, unchanged
2. `shared_env` from metadata
3. Per-service `env` from metadata
Container naming
- Single-container mode (unchanged): `cyberwave-driver-{twin_uuid[:8]}`
- Multi-container mode: `cyberwave-driver-{twin_uuid[:8]}-{service_name}`
Backward compatibility
The existing single-image contract is fully preserved. Metadata without a services key follows the original code path with zero changes. All existing tests continue to pass unmodified.
Writing compatible drivers
A Cyberwave driver is a Docker image that interacts with device hardware and the Cyberwave backend. When Edge Core starts a driver container it sets the following environment variables (provided to the container):
- `CYBERWAVE_TWIN_UUID`
- `CYBERWAVE_API_KEY`
- `CYBERWAVE_TWIN_JSON_FILE` (writable file path)
- `CYBERWAVE_CHILD_TWIN_UUIDS` (optional, comma-separated)
- `CYBERWAVE_DATA_BACKEND` — data transport backend (`zenoh` by default)
- `ZENOH_SHARED_MEMORY` — `true`/`false`; enables zero-copy Zenoh SHM transport
- `ZENOH_CONNECT` — (optional) comma-separated Zenoh router endpoint URLs
CYBERWAVE_CHILD_TWIN_UUIDS is present when child camera twins are attached to the driver twin; drivers can use this to coordinate cameras without additional prompts.
Zenoh data bus
Edge Core automatically injects Zenoh transport configuration into every driver container so that drivers using cw.data.publish() work without any extra configuration. The data-bus variables are:
| Variable | Default | Description |
|---|---|---|
| `CYBERWAVE_DATA_BACKEND` | `zenoh` | Data transport: `zenoh` or `filesystem` |
| `ZENOH_SHARED_MEMORY` | `false` | Opt-in zero-copy shared-memory transport. Requires `--ipc=host` between containers; leave disabled unless your runtime is configured for it. |
| `ZENOH_CONNECT` | (empty) | Router endpoints, e.g. `tcp/10.0.0.1:7447` |
All variables can be overridden per-driver with -e KEY=VALUE in driver params.
Peer-to-peer mode (default): when ZENOH_CONNECT is empty, Zenoh uses multicast discovery. On Linux with --network host (the default), all driver containers on the same machine discover each other automatically.
Router mode (optional): set ZENOH_ROUTER_ENABLED=true to have Edge Core start an eclipse/zenoh:latest router container before the driver containers. This is required for MQTT bridge or multi-hop topologies.
Environment variables for Zenoh infrastructure:
| Variable | Default | Description |
|---|---|---|
| `ZENOH_ROUTER_ENABLED` | `false` | Start a Zenoh router container before drivers |
| `ZENOH_ROUTER_IMAGE` | `eclipse/zenoh:latest` | Docker image for the router |
| `ZENOH_ROUTER_PORT` | `7447` | Host port for the router |
| `ZENOH_SHARED_MEMORY` | `false` | Opt-in shared-memory transport. Requires all Cyberwave containers to share an IPC namespace (`--ipc=host`); leave disabled unless validated end-to-end. |
Driver failure handling
Drivers must exit with a non-zero code when they cannot access required hardware (for example, missing /dev/video* or disconnected peripherals). This allows Edge Core to detect startup failures and trigger restart logic.
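A driver can satisfy this contract with a startup probe along these lines (a minimal sketch; the device glob is an example, so probe whatever hardware your driver actually needs):

```python
def startup_exit_code(devices: list[str]) -> int:
    """Return 0 when required hardware is present, 1 (startup failure) otherwise."""
    return 0 if devices else 1

# In a real driver entrypoint this would be:
#   import glob, sys
#   sys.exit(startup_exit_code(glob.glob("/dev/video*")))
missing = startup_exit_code([])                # no camera: exit non-zero
present = startup_exit_code(["/dev/video0"])   # hardware found: exit 0
```

Exiting non-zero (rather than idling) is what lets Edge Core observe the failure and raise `driver_start_failure`.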
Edge Core alerts and behavior:
- `driver_start_failure`: raised if a driver container cannot reach a stable running state.
- `driver_restart_loop`: raised when a driver restarts more than the configured threshold (default 4 restarts within 60 seconds). The container is stopped and marked as flapping.
Optional environment variables to tune restart behavior:
- `CYBERWAVE_DRIVER_RESTART_LOOP_THRESHOLD` (default: `4`)
- `CYBERWAVE_DRIVER_RESTART_LOOP_WINDOW_SECONDS` (default: `60`)
- `CYBERWAVE_DRIVER_TROUBLESHOOTING_URL` (default: `https://docs.cyberwave.com`)
Driver revival and orphan containers
When a managed driver exits cleanly (Docker's --restart unless-stopped policy does not auto-revive clean exits), Edge Core's revival reconciler re-runs driver startup so the missing container is recreated. Revival is restricted to driver containers this Edge Core process is currently managing — i.e. whose twin is still linked to this edge's fingerprint.
Stopped cyberwave-driver-* containers belonging to twins that have since been unlinked are treated as orphans and ignored by revival. They remain on the host harmlessly until the user removes them with docker rm or docker container prune. Without this guard, an orphan would re-trigger driver startup every revival cycle and force-recreate the currently healthy drivers as a side effect of the idempotent docker rm -f step.
Twin JSON file
CYBERWAVE_TWIN_JSON_FILE is an absolute path to a JSON file provided to the driver. The file contains the digital twin instance object (including its metadata) and the associated catalog twin data, matching the API schema: TwinSchema and AssetSchema.
Drivers may modify this file; Edge Core will sync changes back to the backend when connectivity is available.
Bidirectional twin sync
reconcile_twin_json_file_sync() runs on every reconcile cycle (~15 s) and now operates in both directions:
- **Push (legacy):** a local file whose checksum changed since the last cycle is pushed to `PUT /api/v1/twins/{uuid}`. The set of fields the edge is allowed to push is constrained by `_TWIN_UPDATE_ALLOWED_FIELDS` (no `asset_uuid`, `environment_uuid`, etc.).
- **Pull (new):** for every tracked twin file that did not change locally this cycle, the latest twin is fetched via `client.twins.get_raw(uuid)` and the fields in `_TWIN_PULL_ALLOWED_FIELDS` (currently just `metadata`) are merged into the local file.
The pull leg closes the gap that previously forced an edge-core restart for UI-driven metadata edits (e.g. flipping the privacy frame filter on/off in the sensor settings dialog) to reach the driver container's environment via entrypoint.sh. Push wins for the cycle in which the local file changed; the next cycle's pull surfaces any concurrent backend edits.
The pull set is intentionally narrow: any field the edge legitimately writes locally must not be added to _TWIN_PULL_ALLOWED_FIELDS, otherwise the next cycle would silently clobber the local edit.
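The restricted merge can be sketched as follows; `merge_pull` is an illustrative name, while `_TWIN_PULL_ALLOWED_FIELDS` comes from the text above:

```python
_TWIN_PULL_ALLOWED_FIELDS = {"metadata"}

def merge_pull(local: dict, remote: dict) -> dict:
    """Copy only pull-allowed fields from the backend twin into the local file."""
    merged = dict(local)
    for field in _TWIN_PULL_ALLOWED_FIELDS:
        if field in remote:
            merged[field] = remote[field]
    return merged

# A UI-driven metadata edit reaches the edge; local-only fields survive.
local = {"uuid": "t-1", "metadata": {"privacy_filter": False}, "local_only": "x"}
remote = {"uuid": "t-1", "metadata": {"privacy_filter": True}, "asset_uuid": "a-9"}
merged = merge_pull(local, remote)
```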
Twin metadata
Use the official Cyberwave SDK to interact with the API and MQTT; it abstracts authentication, retries, and handshake logic.
Register a driver by adding its configuration to a twin's metadata (or the catalog twin's metadata if you control the catalog twin). Use the environment view's Advanced editing to edit metadata.
Note: changing a catalog twin's metadata affects all subsequently created digital twins derived from that catalog twin.
Example driver metadata (JSON):
{
"drivers": {
"default": {
"docker_image": "cyberwaveos/so101-driver",
"version": "0.0.1",
"params": [
"--network",
"local",
"--add-host",
"host.docker.internal:host-gateway"
]
}
}
}
Platform-specific driver selection
Edge Core can select platform-specific driver entries before falling back to `default`.
Selection order:
1. Child-registry-specific entry (existing behavior)
2. Host platform/machine keys (for example `darwin-arm64`, `darwin`, `macos`, `mac`)
3. `default`
Example:
{
"drivers": {
"default": {
"docker_image": "cyberwaveos/so101-driver"
},
"darwin-arm64": {
"docker_image": "cyberwaveos/so101-driver:macos",
"params": ["-e", "CYBERWAVE_SERIAL_BRIDGE_URL=tcp://host.docker.internal:22001"]
}
}
}
macOS host-device bridge hook
On macOS, Linux --device mappings in params cannot directly expose host
hardware to Linux containers. Edge Core now supports a pre-run native bridge
hook:
- Set `CYBERWAVE_MACOS_DEVICE_BRIDGE_COMMAND` on the host
- Edge Core executes it once per `--device` mapping before `docker run`
- Template variables available: `{host_device}`, `{container_device}`, `{twin_uuid}`, `{container_name}`, `{config_dir}`
Example:
export CYBERWAVE_MACOS_DEVICE_BRIDGE_COMMAND="cyberwave-edge-hw-bridge --device {host_device} --target {container_device} --twin {twin_uuid}"
The command can start native camera/serial forwarding services that expose
bridge endpoints to the container (typically via host.docker.internal).
Bridge command stdout can optionally return a resolved source for the mapped device:
- JSON: `{"resolved_device":"rtsp://host.docker.internal:8554/cam0"}`
- or line format: `resolved_device=rtsp://host.docker.internal:8554/cam0`
When this value differs from /dev/video*, Edge Core can transparently:
- inject `CYBERWAVE_METADATA_VIDEO_DEVICE` for the driver
- inject `CYBERWAVE_EDGE_VIDEO_DEVICE_MAP` (JSON map of Linux device to resolved source)
- remove Linux-only `--device /dev/video*` flags before `docker run` on macOS (default enabled with `CYBERWAVE_MACOS_STRIP_VIDEO_DEVICE_PARAMS=true`)
This lets Linux-style drivers keep their normal auto-setup logic while receiving a macOS-compatible video source without driver code changes.
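A parser for the two stdout formats above might look like the following sketch; the function name and the line-by-line scan are illustrative choices, not the real Edge Core parser.

```python
import json
from typing import Optional

def parse_resolved_device(stdout: str) -> Optional[str]:
    """Extract an optional resolved source from bridge-command stdout,
    accepting either the JSON form or the resolved_device= line form."""
    for line in stdout.splitlines():
        line = line.strip()
        # JSON form: {"resolved_device": "rtsp://..."}
        if line.startswith("{"):
            try:
                value = json.loads(line).get("resolved_device")
                if value:
                    return value
            except json.JSONDecodeError:
                pass
        # Line form: resolved_device=rtsp://...
        if line.startswith("resolved_device="):
            return line.split("=", 1)[1]
    return None
```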
For camera twins, Edge Core can also provide default bridge candidates on macOS even when metadata has no explicit `--device` params (default driver config), so Linux-oriented camera drivers remain compatible with minimal metadata.
To inject environment variables into a driver container, list `-e` flags inside `params`. Example:
{
"drivers": {
"default": {
"docker_image": "cyberwaveos/go2-native-driver",
"params": ["-e", "MY_VAR=value", "-e", "ANOTHER_VAR=value2"]
}
}
}
Each `-e` must be its own element in the array, followed by the `KEY=value` string as the next element. This is equivalent to passing `-e MY_VAR=value` on the `docker run` command line.
This is useful for driver-specific configuration that varies per device, such as IP addresses, credentials, or feature flags that cannot be stored in the twin's `edge_configs` metadata.
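Because `params` is spliced verbatim into the container launch command, assembling the final argv is straightforward. The helper below is a sketch under that assumption; flag choices beyond the documented fields (`-d`, `--name`) are illustrative, not Edge Core internals.

```python
def build_docker_run_args(entry, container_name):
    """Assemble a docker run argv from a driver metadata entry.
    Each element of params (including every "-e" and its KEY=value)
    must already be a separate list element."""
    image = entry["docker_image"]
    # Append the version as an image tag if none is present
    if "version" in entry and ":" not in image:
        image = f'{image}:{entry["version"]}'
    argv = ["docker", "run", "-d", "--name", container_name]
    argv += entry.get("params", [])  # spliced in verbatim
    argv.append(image)
    return argv
```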
Runtime configuration for drivers (`metadata["edge_configs"]`)
Drivers and edge services should treat `metadata["edge_configs"]` as the source of truth for per-device runtime configuration.
Edge identity should be stored at `metadata["edge_fingerprint"]` (not duplicated inside `edge_configs`).
Runtime access: Edge Core passes the full twin JSON (including `metadata`) to every driver via the `CYBERWAVE_TWIN_JSON_FILE` environment variable. Drivers can read `edge_configs` from that file at startup to obtain per-device settings, for example the right camera source or IP address for the current machine. This is the recommended way to pass device-specific configuration to a driver without hardcoding values in the image.
- Type: object/dictionary
- Value: binding object (`object`)
Canonical shape:
{
"edge_fingerprint": "macbook-pro-a1b2c3d4e5f6",
"edge_configs": {
"camera_config": {
"camera_id": "front",
"source": "rtsp://user:pass@192.168.1.20/stream",
"fps": 10,
"resolution": "VGA",
"camera_type": "cv2"
}
}
}
Field notes:
- `edge_fingerprint`: fingerprint of the edge serving this twin (recommended).
- `camera_config`: per-device camera/runtime config consumed by drivers.
Avoid storing transient runtime state such as `edge_uuid`, `registered_at`, `last_sync`, `last_ip_address`, or `status_data` inside `edge_configs`.
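A driver-side reader for this shape can be sketched as follows. The `CYBERWAVE_TWIN_JSON_FILE` variable and the `metadata`/`edge_configs`/`camera_config` path are documented above; the function name and fallback handling are illustrative choices.

```python
import json
import os

def load_camera_config(default=None):
    """Read per-device camera settings from the twin JSON that
    Edge Core exposes via CYBERWAVE_TWIN_JSON_FILE."""
    path = os.environ.get("CYBERWAVE_TWIN_JSON_FILE")
    if not path or not os.path.exists(path):
        return default or {}
    with open(path) as f:
        twin = json.load(f)
    # Walk metadata -> edge_configs -> camera_config, tolerating absent keys
    return (twin.get("metadata", {})
                .get("edge_configs", {})
                .get("camera_config", default or {}))
```

At startup a driver would call `load_camera_config()` and use fields such as `source` and `fps` to open the right stream for the current machine.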
Backward compatibility:
- Older records may use a legacy map shape (`edge_configs[fingerprint] = {...}`).
- Older records may store camera settings in `cameras[0]` or as top-level fields.
- New writers should prefer `camera_config` under `edge_configs`.
- Do not rely on `PUT /api/v1/edges/{uuid}/twins/{twin_uuid}/camera-config`; it is deprecated. Update twin metadata instead.
Advanced usage
Manual install and troubleshooting
# Ensure the keyring directory exists, then install the Buildkite package signing key
mkdir -p /etc/apt/keyrings
curl -fsSL "https://packages.buildkite.com/cyberwave/cyberwave-edge-core/gpgkey" | gpg --dearmor -o /etc/apt/keyrings/cyberwave_cyberwave-edge-core-archive-keyring.gpg
# Configure the Apt source
echo -e "deb [signed-by=/etc/apt/keyrings/cyberwave_cyberwave-edge-core-archive-keyring.gpg] https://packages.buildkite.com/cyberwave/cyberwave-edge-core/any/ any main\ndeb-src [signed-by=/etc/apt/keyrings/cyberwave_cyberwave-edge-core-archive-keyring.gpg] https://packages.buildkite.com/cyberwave/cyberwave-edge-core/any/ any main" \
> /etc/apt/sources.list.d/buildkite-cyberwave-cyberwave-edge-core.list
# Update the package index and install Edge Core
apt update && apt install cyberwave-edge-core
# Run Edge Core (performs startup checks and starts drivers + worker container)
cyberwave-edge-core
# Show status, credentials and MQTT connectivity (read-only)
cyberwave-edge-core status
# Show version
cyberwave-edge-core --version
# Worker container management (also available via `cyberwave worker …`)
cyberwave-edge-core worker start # Start the worker container
cyberwave-edge-core worker stop # Stop the worker container
cyberwave-edge-core worker restart # Restart the worker container
cyberwave-edge-core worker status # Show container state, workers, cached models, and health
cyberwave-edge-core worker health # Show detailed restart history and circuit-breaker state
cyberwave-edge-core worker logs # Stream worker container logs
cyberwave-edge-core worker logs --no-follow # Print recent logs without following
Preview builds from dev / staging CI are published as separate Debian packages in the same apt repo: `cyberwave-edge-core-dev` and `cyberwave-edge-core-staging`. `apt install cyberwave-edge-core` only pulls tagged releases; install one of the channel packages explicitly when you want those binaries (the packages conflict because they ship the same `/usr/bin/cyberwave-edge-core`).
On non-apt platforms, prerelease Python wheels are published to the Buildkite Python registry and consumed automatically by `cyberwave edge install --channel dev|staging`. Stable pip installs continue to use the public PyPI release.
Environment variables
Run against a different environment/base URL:
export CYBERWAVE_ENVIRONMENT="yourenv"
export CYBERWAVE_BASE_URL="https://yourbaseurl"
cyberwave-edge-core
Control log verbosity (default: INFO):
export CYBERWAVE_EDGE_LOG_LEVEL="DEBUG"
cyberwave-edge-core
Or pass env vars to the CLI installer:
sudo CYBERWAVE_ENVIRONMENT="yourenv" CYBERWAVE_BASE_URL="https://yourbaseurl" CYBERWAVE_MQTT_HOST="yourmqtt" cyberwave edge install
Local development (from this folder)
You can develop both the Cyberwave CLI and Edge Core from the cyberwave-edge-core directory using a single virtual environment that has the monorepo SDK, CLI, and edge-core installed in editable mode.
One-time setup
From cyberwave-edge-core/:
# Create and activate a venv (e.g. .venv in this folder)
python3 -m venv .venv
source .venv/bin/activate # Windows: .venv\Scripts\activate
# Install SDK, CLI, and Edge Core in editable mode (order matters: SDK first)
pip install -e ../cyberwave-sdks/cyberwave-python
pip install -e ../cyberwave-clis/cyberwave-python-cli/"[build]"
pip install -e ".[build]"
Generate the SDK REST client (required for the editable SDK). The SDK's `cyberwave.rest` package is generated from the backend OpenAPI spec and is not committed. If you see `ImportError: cannot import name 'DefaultApi' from 'cyberwave.rest'`:
1. Start the backend: `cd ../cyberwave-backend && docker compose -f local.yml up -d` (wait until healthy).
2. From the repo root, generate the REST client: `cd cyberwave-sdks && ./python-sdk-gen.sh sdk --host localhost:8000`
3. Re-run the `pip install -e` steps above if you already installed; the editable SDK will then include the generated `cyberwave/rest` code.
Run CLI and Edge Core
After activating the venv, both commands are on your PATH:
# CLI
cyberwave --help
cyberwave login --email boss@cyberwave.com --password iamnottheboss
cyberwave edge install --help
# Edge Core
cyberwave-edge-core --help
cyberwave-edge-core status
cyberwave-edge-core
Target backend: If you do not set CYBERWAVE_BASE_URL, the CLI and Edge Core use the default production API (https://api.cyberwave.com). To use your local backend instead:
export CYBERWAVE_BASE_URL=http://localhost:8000
export CYBERWAVE_MQTT_HOST=localhost
export CYBERWAVE_ENVIRONMENT=local
Paths from this folder
| What | Path (from cyberwave-edge-core/) |
|---|---|
| Repo root | .. |
| Python SDK | ../cyberwave-sdks/cyberwave-python |
| CLI | ../cyberwave-clis/cyberwave-python-cli |
Edit code in any of those directories; the editable installs pick up changes (no reinstall needed for Python changes).
Contributing
Contributions are welcome. Please open an issue to discuss bugs or feature requests, and submit a pull request when you are ready.
Community and Documentation
- Documentation: https://docs.cyberwave.com
- Community (Discord): https://discord.gg/dfGhNrawyF
- Issues: https://github.com/cyberwave-os/cyberwave-edge-core/issues