Advantech Board Support Package (BSP) Registry Manager
bsp-registry-tools
Python tools to build, fetch, and work with Yocto-based BSPs using the KAS build system.
Overview
bsp-registry-tools provides a command-line interface and Python API for managing Advantech Board Support Packages (BSPs). It uses YAML-based registry files to define BSP configurations, build environments, and Docker containers, making reproducible Yocto builds straightforward.
Key Features
- BSP registry management via YAML configuration files
- Automatic remote registry fetching – clone/update a remote registry with no manual setup
- Named remote management – `bsp remotes add/remove/rename/set-url/show` for persistent, git-style remote configuration
- Docker container support for reproducible build environments
- KAS integration for Yocto-based builds (`kas`, `kas-container`)
- Interactive shell access to build environments
- Environment variable expansion (`$ENV{VAR}` syntax)
- Configuration export for sharing and archiving build configs
- Comprehensive validation of configurations before building
- Registry splitting – compose a registry from multiple files using the `include` directive
- HTTP server mode – expose the full BSP registry via REST and GraphQL APIs
- Cloud artifact deployment – upload Yocto build artifacts to Azure Blob Storage or AWS S3 with `bsp deploy`
- Cloud artifact gathering – download previously uploaded artifacts from Azure Blob Storage or AWS S3 with `bsp gather`
- HIL test triggering – submit LAVA test jobs with Robot Framework suites after a build
- Shell tab completions – Bash/Zsh/Fish/tcsh completions for commands, presets, devices, releases, and features
Installation
From PyPI
pip install bsp-registry-tools
To also install the optional HTTP server dependencies:
pip install "bsp-registry-tools[server]"
From Source
git clone https://github.com/Advantech-EECC/bsp-registry-tools.git
cd bsp-registry-tools
pip install .
# With server extras:
pip install ".[server]"
Dependencies
- Python 3.8+
- PyYAML >= 6.0
- dacite >= 1.6.0
- kas >= 4.7
- colorama >= 0.4.6
- requests >= 2.28.0 (for LAVA HIL test integration)
- Jinja2 >= 3.1.0 (for LAVA job template rendering)
Optional – server mode (`pip install "bsp-registry-tools[server]"`):
- FastAPI >= 0.100.0
- uvicorn >= 0.23.0
- strawberry-graphql >= 0.200.0
Optional extras for cloud deployment
Cloud SDK dependencies are optional and only needed if you use bsp deploy:
# Azure Blob Storage support
pip install "bsp-registry-tools[azure]"
# AWS S3 support
pip install "bsp-registry-tools[aws]"
# Both providers
pip install "bsp-registry-tools[deploy]"
Optional extras for shell completions
Tab-completion support is optional and requires argcomplete:
pip install "bsp-registry-tools[completions]"
See the Shell Completions section below for activation instructions.
Shell Completions
bsp supports tab completions for Bash, Zsh, Fish, and tcsh via
argcomplete. Completions
dynamically query the active registry so that preset names, device slugs,
release slugs, feature slugs, and remote names are all available.
1. Install the completions extra
pip install "bsp-registry-tools[completions]"
2. Activate completions for your shell
Use the bsp completions sub-command to print the shell-specific activation
snippet, then source it:
# Bash – add to ~/.bashrc
eval "$(bsp completions bash)"
# Zsh – add to ~/.zshrc
eval "$(bsp completions zsh)"
# Fish – add to ~/.config/fish/config.fish
bsp completions fish | source
# tcsh – add to ~/.tcshrc
eval `bsp completions tcsh`
bsp completions without an argument auto-detects the shell from $SHELL.
3. (Alternative) Global activation
If you want completions for all argcomplete-enabled tools at once, use the helper provided by argcomplete itself:
activate-global-python-argcomplete
This installs a single shell hook that covers every tool that calls
argcomplete.autocomplete().
Quick Start
Zero-Config Usage (Remote Registry)
If you have no local registry file, bsp automatically clones the default
Advantech BSP registry into
~/.cache/bsp/registry and keeps it up-to-date on every run:
# First run: clones the registry, then lists BSPs
bsp list
# Subsequent runs: pulls latest changes, then lists BSPs
bsp list
# Skip the network update (useful offline or in CI)
bsp --no-update list
# Use a different remote or branch (one-off override)
bsp --remote https://github.com/my-org/bsp-registry.git --branch dev list
Persistent Named Remotes
For a more permanent setup, register one or more named remotes (similar to
git remote). Once added, these are used automatically whenever bsp falls
back to remote registry fetching – no `--remote` flag required:
# Register a named remote
bsp remotes add myorg https://github.com/my-org/bsp-registry.git --branch dev
# List configured remotes
bsp remotes
# Show full details
bsp remotes show myorg
# Now use it – the stored remote is picked up automatically
bsp list
bsp build my-preset
# With multiple remotes configured, list/tree show all remotes annotated with [remote-name]
bsp list
bsp tree
# Scope listing to a single named remote
bsp list --remote myorg
bsp tree --remote myorg
When multiple remotes are registered, bsp list and bsp tree display
entries from all of them, each annotated with [remote-name]. Registries are
kept strictly separate – definitions from different remotes are never merged.
Use --remote NAME with list or tree to restrict output to a single named
remote.
Manual Registry Usage
1. Create a BSP Registry File
Create a bsp-registry.yaml or bsp-registry.yml file (see examples/bsp-registry.yaml):
specification:
version: "2.0"
environment:
variables:
- name: "GITCONFIG_FILE"
value: "$ENV{HOME}/.gitconfig"
# Named environment: container + variables used for all builds by default
environments:
default:
container: "debian-bookworm"
variables:
- name: "DL_DIR"
value: "$ENV{HOME}/yocto-cache/downloads"
- name: "SSTATE_DIR"
value: "$ENV{HOME}/yocto-cache/sstate"
containers:
debian-bookworm:
image: "bsp/registry/debian/kas:5.1"
file: Dockerfile
args:
- name: "DISTRO"
value: "debian-bookworm"
- name: "KAS_VERSION"
value: "5.1"
registry:
# frameworks and distro define the build system hierarchy (optional but recommended)
frameworks:
- slug: yocto
description: "Yocto Project build system"
vendor: "Yocto Project"
includes:
- kas/yocto/yocto.yaml
distro:
- slug: poky
description: "Poky (Yocto Project reference distro)"
framework: yocto # links distro to a framework for feature compatibility checks
includes:
- kas/yocto/distro/poky.yaml
# devices define hardware targets (KAS includes listed flat, no nested build: block)
devices:
- slug: qemuarm64
description: "QEMU ARM64 (emulated)"
vendor: qemu
soc_vendor: arm
includes:
- kas/qemu/qemuarm64.yaml
releases:
- slug: scarthgap
description: "Yocto 5.0 LTS (Scarthgap)"
distro: poky
yocto_version: "5.0"
includes:
- kas/scarthgap.yaml
# bsp presets name a device + release + features combination.
# Use "releases" (plural) to target multiple releases without repetition:
bsp:
- name: poky-qemuarm64
description: "Poky QEMU ARM64"
device: qemuarm64
releases: [scarthgap, styhead] # expands to poky-qemuarm64-scarthgap / poky-qemuarm64-styhead
features: []
build:
container: "debian-bookworm"
# Single-release entry (backward compatible):
- name: poky-qemuarm64-scarthgap-ota
description: "Poky QEMU ARM64 Scarthgap with OTA"
device: qemuarm64
release: scarthgap
features: [ota]
build:
container: "debian-bookworm"
path: build/poky-qemuarm64-scarthgap-ota
2. List Available BSPs
# With an explicit registry file
bsp --registry bsp-registry.yaml list
# Or simply if bsp-registry.yaml (or bsp-registry.yml) is in the current directory
bsp list
- poky-qemuarm64-scarthgap: Poky QEMU ARM64 Scarthgap (Yocto 5.0 LTS)
3. Build a BSP
bsp build poky-qemuarm64-scarthgap
4. Enter Interactive Shell
bsp shell poky-qemuarm64-scarthgap
5. Submit a HIL Test Job
# Submit a LAVA test job for a pre-built image and wait for results
bsp test poky-qemuarm64-scarthgap --wait
# Build and immediately trigger a LAVA test after the build succeeds
bsp build poky-qemuarm64-scarthgap --test --wait
CLI Reference
usage: bsp [-h] [--verbose] [--registry REGISTRY] [--no-color]
[--remote REMOTE] [--branch BRANCH] [--update | --no-update]
[--local]
{build,list,containers,tree,export,shell,server,deploy,gather,test,remotes} ...
Advantech Board Support Package Registry
positional arguments:
{build,list,containers,tree,export,shell,server,deploy,gather,test,remotes}
Command to execute
build Build an image for BSP
list List available BSPs and components
containers List available containers
tree Display a tree view of the BSP registry
export Export BSP configuration
shell Enter interactive shell for BSP
server Start a GraphQL / REST HTTP server
deploy Deploy build artifacts to cloud storage
gather Download BSP build artifacts from cloud storage
test Submit a LAVA HIL test job for a BSP
remotes Manage named remote BSP registry sources
options:
-h, --help show this help message and exit
--verbose, -v Verbose output
--registry REGISTRY, -r REGISTRY
BSP Registry file (local path; skips remote fetch)
--no-color Disable colored output
--remote REMOTE Remote registry git URL
(default: https://github.com/Advantech-EECC/bsp-registry.git)
--branch BRANCH Remote registry branch (default: main)
--update Update the cached registry clone before use (default)
--no-update Skip updating the cached registry clone
--local Force local registry lookup only (do not use remote)
Registry Resolution Priority
The tool determines which registry file to use in the following order:
1. `--registry <path>` – explicit local file; remote fetch is skipped entirely.
2. `--local` – use `./bsp-registry.yaml` or `./bsp-registry.yml` in the current directory; no network access.
3. `bsp-registry.yaml` exists in the current directory – auto-detect (preferred extension).
4. `bsp-registry.yml` exists in the current directory – auto-detect (alternate extension).
5. `--remote URL` flag(s) provided – fetch the specified remote(s) on-the-fly (no persistence).
6. Named remotes configured – if `bsp remotes add` has registered remotes in `~/.config/bsp/remotes.yaml`, those are fetched automatically.
7. Otherwise – fall back to the default Advantech BSP registry cached at `~/.cache/bsp/registry`.
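The priority order above can be sketched in Python. This is illustrative only: the function name `resolve_registry` is hypothetical, and the real tool additionally handles `--branch`, caching, and updates.

```python
from pathlib import Path

DEFAULT_REGISTRY = "https://github.com/Advantech-EECC/bsp-registry.git"

def resolve_registry(args: dict, named_remotes: dict):
    """Mirror the documented registry resolution order (sketch only)."""
    if args.get("registry"):                      # 1. --registry <path>
        return ("local", args["registry"])
    if args.get("local"):                         # 2. --local (current dir only)
        return ("local", "./bsp-registry.yaml")
    for name in ("bsp-registry.yaml", "bsp-registry.yml"):
        if Path(name).exists():                   # 3./4. auto-detect in cwd
            return ("local", name)
    if args.get("remote"):                        # 5. one-off --remote URL
        return ("remote", args["remote"])
    if named_remotes:                             # 6. configured named remotes
        return ("named-remotes", list(named_remotes))
    return ("remote", DEFAULT_REGISTRY)           # 7. default Advantech registry
```

Each branch corresponds to one numbered step, so earlier options always shadow later ones.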
Global Options
| Option | Description |
|---|---|
| `--verbose`, `-v` | Enable verbose/debug output |
| `--registry REGISTRY`, `-r REGISTRY` | Path to BSP registry file (local override) |
| `--no-color` | Disable colored output |
| `--remote REMOTE` | Remote registry git URL (default: Advantech BSP registry) |
| `--branch BRANCH` | Remote registry branch (default: main) |
| `--update` / `--no-update` | Update cached registry clone before use (default: update) |
| `--local` | Force local lookup; never contact remote |
Commands
list โ List available BSPs
bsp list
bsp --registry my-registry.yaml list
# Filter by component type
bsp list devices
bsp list releases
bsp list features
bsp list distros
# Filter releases to those compatible with a specific device
bsp list releases --device imx8qm
# When multiple remotes are configured, scope output to a single named remote
bsp list --remote myorg
bsp list devices --remote myorg
bsp list releases --remote myorg
When multiple remotes are loaded, every entry is annotated with [registry-name]
so the source is always visible. Registries from different remotes are kept
separate – their definitions are never merged together.
| Option | Description |
|---|---|
| `--remote NAME` | Show only entries from the named remote registry |
| `--device DEVICE`, `-d DEVICE` | Filter releases by device slug (only used with `releases`) |
containers โ List available container definitions
bsp containers
tree โ Display a tree view of the BSP registry
bsp tree
bsp tree --full
bsp tree --compact
bsp --no-color tree
bsp --registry my-registry.yaml tree
# When multiple remotes are configured, scope the tree to a single named remote
bsp tree --remote myorg
bsp tree --full --remote myorg
Renders the full registry as a colored ASCII tree, grouped into sections:
Frameworks, Distros, Releases (with vendor overrides), Devices,
Features (with release and vendor overrides in full mode), and BSP Presets (with device, release, and feature details).
Use --no-color to disable colors (e.g. for scripts or log files).
When multiple remotes are loaded, items are grouped under [registry-name]
sub-nodes. Registries from different remotes are kept separate – their
definitions are never merged together. Use --remote NAME to restrict the
tree to a single named remote.
| Option | Description |
|---|---|
| `--full` | Show full details including includes lists, release overrides and vendor overrides for features, vendor overrides for releases, and override slugs for presets |
| `--compact` | Show compact output with names/slugs only (no sub-items) |
| `--remote NAME` | Show only entries from the named remote registry |
Example output (bsp tree):
BSP Registry
├── Frameworks (1)
│   └── yocto: Yocto Project (vendor: yocto)
├── Distros (1)
│   └── poky: Poky (vendor: yocto, framework: yocto)
├── Releases (1)
│   └── scarthgap: Yocto 5.0 LTS [Yocto 5.0]
│       ├── distro: poky
│       └── vendor override: advantech (sub-releases: imx-6.6.53)
├── Devices (2)
│   ├── qemu-arm64: QEMU ARM64 (vendor: qemu, soc_vendor: arm)
│   └── imx8qm: i.MX8 QM (vendor: advantech, soc_vendor: nxp, soc_family: imx8)
├── Features (2)
│   ├── ota: OTA Update
│   └── secure-boot: Secure Boot [requires vendor: ['advantech']]
└── BSP Presets (2)
    ├── qemu-arm64-scarthgap: QEMU ARM64 Scarthgap
    │   └── device: qemu-arm64 release: scarthgap
    └── imx8qm-scarthgap: i.MX8 QM Scarthgap
        ├── device: imx8qm release: scarthgap
        ├── vendor release: imx-6.6.53
        └── features: ota, secure-boot
Example output (bsp tree --full):
In --full mode all includes lists are expanded, release overrides and vendor overrides for features are shown as nested sub-trees, and vendor overrides for releases are also expanded:
BSP Registry
├── Releases (1)
│   └── scarthgap: Yocto 5.0 LTS [Yocto 5.0]
│       ├── distro: poky
│       ├── includes: kas/poky/scarthgap.yaml
│       └── vendor override: advantech (distro: fsl-imx-xwayland)
│           ├── includes: kas/yocto/vendors/advantech/scarthgap.yaml
│           └── vendor release: imx-6.6.53: Scarthgap for i.MX 6.6.53
│               └── includes:
│                   └── kas/yocto/vendors/advantech/nxp/imx-6.6.53.yaml
└── Features (1)
    └── ostree: Enable OSTree support in the Yocto image [requires compatible_with: yocto]
        ├── includes:
        │   └── features/ota/ostree/ostree.yml
        ├── release override: scarthgap
        │   └── includes:
        │       └── features/ota/ostree/ostree-scarthgap.yml
        ├── release override: styhead
        │   └── includes:
        │       └── features/ota/ostree/ostree-styhead.yml
        └── vendor override: advantech
            └── soc vendor: nxp
                └── includes: features/ota/ostree/modular-bsp-ota-nxp.yml
build โ Build a BSP image
bsp build <bsp_name> [--feature FEATURE...] [--checkout] [--target TARGET] [--task TASK] [--path PATH]
bsp build <bsp_name> [--feature FEATURE...] [--deploy] [--deploy-provider PROVIDER] [--deploy-container CONTAINER] [--deploy-prefix PREFIX]
bsp build <bsp_name> [--feature FEATURE...] [--test [--wait] [--lava-server URL] [--lava-token TOKEN] [--artifact-url URL]]
bsp build --device <device> --release <release> [--feature FEATURE...] [--checkout] [--target TARGET] [--task TASK] [--path PATH] [--test ...]
| Option | Description |
|---|---|
| `--feature FEATURE`, `-f FEATURE` | Feature slug to enable (can be repeated). When used with a preset name, extra features are merged with those already declared in the preset. |
| `--checkout` | Validate configuration and check out repos without building |
| `--path PATH` | Override the output build directory path defined in the registry |
| `--target TARGET` | Bitbake build target (image or recipe) to pass to KAS, overriding any targets defined in the registry preset |
| `--task TASK` | Bitbake task to run (e.g. compile, configure) to pass to KAS |
| `--deploy` | Deploy artifacts to cloud storage after a successful build |
| `--deploy-provider PROVIDER` | Cloud storage provider: azure (default) or aws |
| `--deploy-container CONTAINER` | Azure container or AWS bucket name (overrides registry config) |
| `--deploy-prefix PREFIX` | Remote path prefix template (overrides registry config) |
| `--deploy-archive-name NAME` | Bundle artifacts into a single archive with this name before uploading (supports {device}, {release}, {distro}, {vendor}, {date}, {datetime}) |
| `--deploy-archive-format FORMAT` | Archive format: tar.gz (default), tar.bz2, tar.xz, zip |
| `--test` | Submit a LAVA HIL test job after a successful build |
| `--wait` | Wait for the LAVA job to complete and print results (requires `--test`) |
| `--lava-server URL` | LAVA server base URL override (overrides registry `lava.server`) |
| `--lava-token TOKEN` | LAVA API token override (overrides registry `lava.token`) |
| `--artifact-url URL` | Base URL where build artifacts are served to the LAVA lab |
Examples:
# Full build
bsp build poky-qemuarm64-scarthgap
# Checkout/validate only (fast, no build)
bsp build poky-qemuarm64-scarthgap --checkout
# Build a preset with an extra feature enabled on top of the preset's defaults
bsp build poky-qemuarm64-scarthgap --feature secure-boot
# Build with multiple extra features
bsp build poky-qemuarm64-scarthgap --feature secure-boot --feature ota
# Override the output build directory
bsp build poky-qemuarm64-scarthgap --path /mnt/fast-ssd/build
# Build a specific Bitbake image (overrides registry-configured targets)
bsp build poky-qemuarm64-scarthgap --target core-image-minimal
# Build a specific image and run only the compile task
bsp build poky-qemuarm64-scarthgap --target core-image-minimal --task compile
# Build and deploy artifacts to Azure automatically
bsp build poky-qemuarm64-scarthgap --deploy
# Build and deploy to a specific AWS bucket
bsp build poky-qemuarm64-scarthgap --deploy --deploy-provider aws --deploy-container my-s3-bucket
# Build and trigger LAVA test, wait for result
bsp build poky-qemuarm64-scarthgap --test --wait
# Build with LAVA credential overrides
bsp build poky-qemuarm64-scarthgap --test --wait \
--lava-server https://lava.ci.example.com \
--lava-token $LAVA_TOKEN \
--artifact-url http://files.example.com/builds
shell โ Interactive shell in build environment
bsp shell <bsp_name> [--command COMMAND]
| Option | Description |
|---|---|
| `--command COMMAND`, `-c COMMAND` | Execute a specific command instead of starting an interactive shell |
Examples:
# Interactive shell
bsp shell poky-qemuarm64-scarthgap
# Execute single command
bsp shell poky-qemuarm64-scarthgap --command "bitbake core-image-minimal"
export โ Export BSP configuration
bsp export <bsp_name> [--output OUTPUT]
bsp export --device <device> --release <release> [--feature FEATURE...] [--output OUTPUT]
| Option | Description |
|---|---|
| `--output OUTPUT`, `-o OUTPUT` | Output file path (default: stdout) |
Examples:
# Print to stdout
bsp export poky-qemuarm64-scarthgap
# Save to file
bsp export poky-qemuarm64-scarthgap --output exported-config.yaml
server โ Start an HTTP server (REST + GraphQL)
Starts a FastAPI-based HTTP server that exposes the full BSP registry via both a REST API and a GraphQL API. Requires the server optional extras (pip install "bsp-registry-tools[server]").
bsp server [--host HOST] [--port PORT] [--reload]
| Option | Default | Description |
|---|---|---|
| `--host HOST` | 127.0.0.1 | Host address to bind to |
| `--port PORT` | 8080 | Port to listen on |
| `--reload` | – | Enable auto-reload on code changes (development mode) |
Once started, the following interfaces are available:
| URL | Description |
|---|---|
| http://localhost:8080/docs | Swagger / OpenAPI UI (REST) |
| http://localhost:8080/redoc | ReDoc UI (REST) |
| http://localhost:8080/graphql | GraphiQL interactive editor (GraphQL) |
| http://localhost:8080/api/v1/… | REST API endpoints |
deploy โ Upload build artifacts to cloud storage
Deploy Yocto build artifacts (images, SDKs) that were produced by bsp build
to Azure Blob Storage or AWS S3.
bsp deploy <bsp_name> [OPTIONS]
bsp deploy --device <d> --release <r> [--feature <f>] [OPTIONS]
| Option | Description |
|---|---|
| `--provider PROVIDER` | Storage provider: azure (default) or aws |
| `--container CONTAINER`, `--bucket CONTAINER` | Azure container or AWS S3 bucket name |
| `--prefix PREFIX` | Remote path prefix template (supports {device}, {release}, {distro}, {vendor}, {date}, {datetime}) |
| `--pattern PATTERN` | Glob pattern for artifacts to upload (repeatable; overrides registry config) |
| `--archive-name NAME` | Bundle artifacts into a single archive with this name before uploading (supports {device}, {release}, {distro}, {vendor}, {date}, {datetime}) |
| `--archive-format FORMAT` | Archive format: tar.gz (default), tar.bz2, tar.xz, zip |
| `--dry-run` | List what would be uploaded without uploading (no credentials required) |
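The `{device}`/`{release}`/`{date}`-style placeholders in `--prefix` and `--archive-name` behave like ordinary string formatting. A minimal sketch of the expansion (the helper name `render_prefix` and the exact date formats are assumptions, not the tool's actual implementation):

```python
from datetime import datetime

def render_prefix(template: str, device: str, release: str,
                  distro: str = "", vendor: str = "") -> str:
    """Fill the documented placeholders in a prefix/archive-name template."""
    now = datetime.now()
    return template.format(
        device=device, release=release, distro=distro, vendor=vendor,
        date=now.date().isoformat(),            # e.g. 2025-03-15
        datetime=now.strftime("%Y%m%d-%H%M%S"), # e.g. 20250315-142500
    )

print(render_prefix("{device}/{release}/{date}", "qemuarm64", "scarthgap"))
```

Because expansion happens before upload, the same template yields a new remote path each day, which is why `bsp gather` offers `--date` to reach older uploads.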
gather โ Download build artifacts from cloud storage
Downloads Yocto build artifacts that were previously uploaded by bsp deploy
from Azure Blob Storage or AWS S3 to a local directory.
bsp gather <bsp_name> [OPTIONS]
bsp gather --device <d> --release <r> [--feature <f>] [OPTIONS]
| Option | Description |
|---|---|
| `--provider PROVIDER` | Storage provider: azure (default) or aws |
| `--container CONTAINER`, `--bucket CONTAINER` | Azure container or AWS S3 bucket name |
| `--prefix PREFIX` | Remote path prefix template (supports {device}, {release}, {distro}, {vendor}, {date}) |
| `--dest-dir PATH` | Local directory to write downloaded artifacts into (default: registry build path) |
| `--date DATE` | Override the {date} placeholder in the prefix template (YYYY-MM-DD); defaults to today |
| `--dry-run` | List what would be downloaded without downloading (no credentials required) |
Examples:
# Download artifacts for a preset build (uses today's date)
bsp gather poky-qemuarm64-scarthgap
# Download artifacts into a specific local directory
bsp gather poky-qemuarm64-scarthgap --dest-dir /mnt/artifacts
# Download artifacts produced on a specific date
bsp gather poky-qemuarm64-scarthgap --date 2025-03-15
# Preview what would be downloaded (dry-run)
bsp gather poky-qemuarm64-scarthgap --dry-run
# Component-based gather
bsp gather --device qemuarm64 --release scarthgap --dest-dir /mnt/artifacts
test โ Submit a LAVA HIL test job
Submits a LAVA job for hardware-in-the-loop testing. By default the job is submitted and the URL is printed; use --wait to block until it completes.
bsp test <bsp_name> [--wait] [--lava-server URL] [--lava-token TOKEN] [--artifact-url URL]
bsp test --device <device> --release <release> [--feature FEATURE...] [--wait] ...
| Option | Description |
|---|---|
| `--wait` | Block until the LAVA job completes and print per-suite results |
| `--lava-server URL` | LAVA server base URL (overrides registry `lava.server`) |
| `--lava-token TOKEN` | LAVA API authentication token (overrides registry `lava.token`) |
| `--artifact-url URL` | Base URL where built image artifacts are accessible to the LAVA lab |
Examples:
# Submit a LAVA job for a pre-built image and exit immediately
bsp test poky-qemuarm64-scarthgap
# Submit and wait for the job to complete
bsp test poky-qemuarm64-scarthgap --wait
# Override LAVA settings from the CLI
bsp test poky-qemuarm64-scarthgap --wait \
--lava-server https://lava.ci.example.com \
--lava-token $LAVA_TOKEN \
--artifact-url http://minio.example.com/builds
# Component-based (no preset needed)
bsp test --device qemuarm64 --release scarthgap --wait
remotes โ Manage named remote registries
bsp remotes manages a persistent list of named remote BSP registry sources,
stored in ~/.config/bsp/remotes.yaml (overridable via the
BSP_REMOTES_CONFIG environment variable). This is modelled after
git remote and integrates with the registry resolution fallback: when no
--remote flag is passed and no local registry file exists, configured remotes
are used automatically.
List remotes
# Short listing – one name per line
bsp remotes
# Verbose – include URL and branch
bsp remotes -v
Example output:
advantech
myorg
advantech https://github.com/Advantech-EECC/bsp-registry.git (branch: main)
myorg https://github.com/my-org/bsp-registry.git (branch: develop)
Add a remote
bsp remotes add <name> <url> [--branch BRANCH]
# Add the default Advantech registry under a friendly name
bsp remotes add advantech https://github.com/Advantech-EECC/bsp-registry.git
# Add a private registry on a non-default branch
bsp remotes add myorg https://github.com/my-org/bsp-registry.git --branch develop
Remove a remote
bsp remotes remove <name>
# or: bsp remotes rm <name>
Rename a remote
bsp remotes rename <old-name> <new-name>
Change a remote's URL
bsp remotes set-url <name> <new-url>
# Also update the branch at the same time
bsp remotes set-url <name> <new-url> --branch <branch>
Show details of a remote
bsp remotes show <name>
Example output:
name: myorg
url: https://github.com/my-org/bsp-registry.git
branch: develop
remotes options summary
| Sub-command | Arguments | Description |
|---|---|---|
| (none) | | List configured remote names |
| `-v` / `--verbose` | | Show URL and branch alongside each name |
| `add` | `<name> <url> [--branch BRANCH]` | Register a new named remote |
| `remove` / `rm` | `<name>` | Remove a named remote |
| `rename` | `<old-name> <new-name>` | Rename a remote |
| `set-url` | `<name> <url> [--branch BRANCH]` | Update URL (and optionally branch) |
| `show` | `<name>` | Print name, URL and branch for a remote |
Config file location – `~/.config/bsp/remotes.yaml` (override with `BSP_REMOTES_CONFIG=/path/to/remotes.yaml bsp remotes ...`)
HTTP Server (REST + GraphQL)
The bsp server command exposes the entire BSP registry over HTTP. Both a REST API and a GraphQL API are available simultaneously on the same port.
Installation
pip install "bsp-registry-tools[server]"
Starting the server
# Default: http://127.0.0.1:8080
bsp server
# Custom host/port
bsp server --host 0.0.0.0 --port 9000
# Using a specific registry file
bsp --registry /path/to/bsp-registry.yaml server --host 0.0.0.0 --port 8080
REST API (/api/v1/)
Query endpoints (GET)
| Method | Path | Description |
|---|---|---|
| GET | `/api/v1/bsp` | List all BSP presets |
| GET | `/api/v1/devices` | List all hardware devices |
| GET | `/api/v1/releases` | List all releases |
| GET | `/api/v1/releases?device=<slug>` | List releases compatible with a device |
| GET | `/api/v1/features` | List all optional features |
| GET | `/api/v1/distros` | List all distribution definitions |
| GET | `/api/v1/frameworks` | List all framework definitions |
| GET | `/api/v1/containers` | List all Docker container definitions |
Example:
curl http://localhost:8080/api/v1/devices
[
{
"slug": "qemuarm64",
"description": "QEMU ARM64 (emulated)",
"vendor": "qemu",
"soc_vendor": "arm",
"soc_family": null,
"includes": ["kas/qemu/qemuarm64.yaml"],
"local_conf": []
}
]
Action endpoints (POST)
| Method | Path | Description |
|---|---|---|
| POST | `/api/v1/export` | Resolve and return a BSP config as YAML |
| POST | `/api/v1/build` | Trigger a BSP build (blocking) |
| POST | `/api/v1/shell` | Run a command inside the build container |
All action endpoints accept a JSON body with either bsp_name or both device + release:
# Export by preset name
curl -X POST http://localhost:8080/api/v1/export \
-H "Content-Type: application/json" \
-d '{"bsp_name": "poky-qemuarm64-scarthgap"}'
# Export by components
curl -X POST http://localhost:8080/api/v1/export \
-H "Content-Type: application/json" \
-d '{"device": "qemuarm64", "release": "scarthgap", "features": []}'
# Validate (checkout only) without building
curl -X POST http://localhost:8080/api/v1/build \
-H "Content-Type: application/json" \
-d '{"bsp_name": "poky-qemuarm64-scarthgap", "checkout_only": true}'
Interactive REST documentation
Navigate to http://localhost:8080/docs for the full Swagger / OpenAPI UI or http://localhost:8080/redoc for ReDoc.
GraphQL API (/graphql)
Navigate to http://localhost:8080/graphql for the interactive GraphiQL editor.
Queries
# List all devices
{ devices { slug description vendor socVendor } }
# List all BSP presets
{ bsp { name description device release features } }
# List releases compatible with a specific device
{ releases(device: "qemuarm64") { slug description yoctoVersion } }
# List features, distros, frameworks, and containers
{ features { slug description compatibleWith }
distros { slug description framework }
frameworks { slug vendor }
containers { name image } }
Mutations
# Export BSP config by preset name
mutation {
exportBsp(bspName: "poky-qemuarm64-scarthgap") {
yamlContent
}
}
# Export by components
mutation {
exportBsp(device: "qemuarm64", release: "scarthgap") {
yamlContent
}
}
# Validate (checkout only) without building
mutation {
buildBsp(bspName: "poky-qemuarm64-scarthgap", checkoutOnly: true) {
status
message
}
}
# Run a command in the build container
mutation {
shellCommand(bspName: "poky-qemuarm64-scarthgap", command: "bitbake -e") {
returnCode
output
}
}
Python API โ embedding the server
You can also embed the server directly in Python code:
import uvicorn
from bsp.server import create_app
app = create_app(registry_path="/path/to/bsp-registry.yaml")
uvicorn.run(app, host="0.0.0.0", port=8080)
Or reuse an already-initialised BspManager:
from bsp import BspManager
from bsp.server import create_app
import uvicorn
manager = BspManager("bsp-registry.yaml")
manager.initialize()
app = create_app(manager=manager)
uvicorn.run(app, host="0.0.0.0", port=8080)
HIL Testing with LAVA and Robot Framework
bsp-registry-tools can submit Hardware-in-the-Loop (HIL) test jobs to a
LAVA server after or independently of a build.
Test jobs are rendered from a Jinja2 template and can run
Robot Framework suites inside the LAVA pipeline.
Configuration overview
LAVA settings live in two places:
- Registry-level `lava:` block – shared server settings (URL, token, timeouts). All values support `$ENV{}` expansion so credentials are never hardcoded.
- Per-preset `testing.lava:` block – device type, artifact URL, LAVA tags, custom job template, and Robot Framework suites.
CLI flags (--lava-server, --lava-token, --artifact-url) override both.
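The `$ENV{VAR}` syntax can be pictured as a simple substitution from the process environment. A sketch (the function name `expand_env` is hypothetical, and the real tool's handling of unset variables may differ; here they expand to an empty string):

```python
import os
import re

def expand_env(value: str) -> str:
    """Replace every $ENV{VAR} reference with the variable's value."""
    return re.sub(
        r"\$ENV\{([^}]+)\}",
        lambda m: os.environ.get(m.group(1), ""),
        value,
    )

os.environ["LAVA_SERVER"] = "https://lava.example.com"
print(expand_env("$ENV{LAVA_SERVER}"))  # https://lava.example.com
```

This is why registry files can ship in version control without embedding tokens: the secret only exists in the environment at run time.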
Minimal example
# bsp-registry.yaml
specification:
version: "2.0"
# Registry-level LAVA connection settings
lava:
server: "$ENV{LAVA_SERVER}" # e.g. https://lava.example.com
token: "$ENV{LAVA_TOKEN}" # LAVA API authentication token
username: "$ENV{LAVA_USER}" # LAVA username (optional)
wait_timeout: 3600 # max seconds to wait for a job (default: 1 h)
poll_interval: 30 # polling interval in seconds
registry:
devices:
- slug: qemuarm64
description: "QEMU ARM64"
vendor: qemu
soc_vendor: arm
includes:
- kas/qemu/qemuarm64.yaml
releases:
- slug: scarthgap
description: "Yocto 5.0 LTS"
distro: poky
includes:
- kas/scarthgap.yaml
bsp:
- name: poky-qemuarm64-scarthgap
description: "Poky QEMU ARM64 Scarthgap"
device: qemuarm64
release: scarthgap
build:
container: debian-bookworm
path: build/poky/qemuarm64/scarthgap
# HIL test configuration
testing:
lava:
device_type: "qemu-aarch64" # LAVA device type label
artifact_url: "http://files.ci/builds" # where the image is served
tags: ["hil", "qemu"] # optional LAVA scheduler tags
job_template: "kas/lava/qemu.yaml.j2" # optional; builtin used if omitted
robot:
suites:
- tests/robot/smoke.robot
- tests/robot/boot.robot
variables:
BOARD_IP: "10.0.0.5"
SSH_PORT: "22"
LAVA job templates
When job_template is omitted a built-in minimal template is used (QEMU boot +
optional Robot Framework test action). For real devices, create a Jinja2
template and point job_template at it.
A fully annotated example is provided at
examples/lava/job-template.yaml.j2.
Available Jinja2 context variables:

| Variable | Description |
|---|---|
| `device_type` | LAVA device type label |
| `job_name` | Auto-composed from device/release/feature slugs |
| `image_url` | Full artifact URL (`artifact_url` + `build_path`) |
| `artifact_url` | Base artifact URL |
| `build_path` | Relative build output directory |
| `device_slug` | Device slug (e.g. `qemuarm64`) |
| `release_slug` | Release slug (e.g. `scarthgap`) |
| `feature_slugs` | List of active feature slugs |
| `lava_tags` | List of LAVA scheduler tags |
| `robot_suites` | List of Robot Framework `.robot` file paths |
| `robot_variables` | Dict of Robot Framework `--variable` pairs |
| `timeout_minutes` | Overall job timeout in minutes |
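To make the table concrete, the sketch below builds a context like the one the renderer receives and substitutes it into a template. The real tool renders Jinja2 templates; `string.Template` is used here only to keep the example stdlib-only, and the composition of `job_name` and `image_url` is an assumption that follows the descriptions above.

```python
from string import Template

# Hypothetical context mirroring the variables in the table above
ctx = {
    "device_slug": "qemuarm64",
    "release_slug": "scarthgap",
    "artifact_url": "http://files.ci/builds",
    "build_path": "build/poky-qemuarm64-scarthgap",
}
# job_name is auto-composed from the slugs; image_url joins artifact_url
# and build_path (assumed composition, matching the table's descriptions)
ctx["job_name"] = f"{ctx['device_slug']}-{ctx['release_slug']}"
ctx["image_url"] = f"{ctx['artifact_url']}/{ctx['build_path']}"

snippet = Template("job_name: $job_name\nimage: $image_url").substitute(ctx)
print(snippet)
```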
### Workflow examples

```bash
# Submit a LAVA job after a successful build and wait for results
bsp build poky-qemuarm64-scarthgap --test --wait

# Submit a LAVA test job for an already-built image
bsp test poky-qemuarm64-scarthgap --wait

# Override LAVA settings at the command line (useful in CI)
export LAVA_SERVER=https://lava.ci.example.com
export LAVA_TOKEN=mytoken
bsp test poky-qemuarm64-scarthgap \
  --artifact-url http://minio.example.com/builds \
  --wait

# Component-based test (no preset required)
bsp test --device qemuarm64 --release scarthgap \
  --lava-server https://lava.ci.example.com \
  --lava-token $LAVA_TOKEN \
  --wait
```
### Python API

```python
from bsp import BspManager, LavaClient, LavaTestSuite, build_lava_job

manager = BspManager("bsp-registry.yaml")
manager.initialize()

# Submit LAVA test and wait for results
passed = manager.test_bsp(
    "poky-qemuarm64-scarthgap",
    lava_server="https://lava.example.com",
    lava_token="mytoken",
    artifact_url="http://files.example.com/builds",
    wait=True,
)

# Use LavaClient directly
client = LavaClient(server="https://lava.example.com", token="mytoken")
job_id = client.submit_job(job_yaml_string)
health = client.wait_for_job(job_id, timeout=3600, poll_interval=30)
suites: list[LavaTestSuite] = client.get_job_results(job_id)
```
## Registry Configuration Reference

The BSP registry is a YAML file following schema v2.0. See `docs/registry-v2.md` for the full reference and `docs/server.md` for the HTTP server reference. Key top-level sections:
### `specification`

```yaml
specification:
  version: "2.0"
```
### `include` (optional)

Split a large registry across multiple files using the `include` directive.
Paths are relative to the file that contains the directive.

```yaml
include:
  - devices/boards.yaml
  - releases/scarthgap.yaml
```

Each included file is merged before the root file's own content. Lists
(e.g. `devices`, `releases`, `features`, `environment`) are concatenated; dicts
(e.g. `containers`, `environments`) are merged recursively; scalars use the root
file's value. Included files can themselves contain further `include` directives,
and circular references are detected at load time. See `docs/registry-v2.md`
for full details.
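The merge rules above can be sketched in a few lines. This is an illustration of the stated semantics (lists concatenate with included entries first, dicts merge recursively, scalars keep the root value), not the library's actual implementation:

```python
def merge_registry(root: dict, included: dict) -> dict:
    """Merge an included file's content under the root file's content,
    following the rules described above (assumed semantics)."""
    merged = dict(included)
    for key, root_val in root.items():
        inc_val = merged.get(key)
        if isinstance(root_val, list) and isinstance(inc_val, list):
            merged[key] = inc_val + root_val   # included entries come first
        elif isinstance(root_val, dict) and isinstance(inc_val, dict):
            merged[key] = merge_registry(root_val, inc_val)  # recursive merge
        else:
            merged[key] = root_val             # scalar: root file's value wins
    return merged

root = {"specification": {"version": "2.0"},
        "registry": {"devices": [{"slug": "board-b"}]}}
included = {"registry": {"devices": [{"slug": "board-a"}]}}
print(merge_registry(root, included))
```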
### `environment`

Global build environment applied to all builds. Groups `variables` (supports
`$ENV{VAR_NAME}` expansion) and `copy` (file-copy entries executed inside the
build environment before every build) under a single key.

```yaml
environment:
  variables:
    - name: "GITCONFIG_FILE"
      value: "$ENV{HOME}/.gitconfig"
    - name: "DL_DIR"
      value: "$ENV{HOME}/yocto-cache/downloads"
    - name: "SSTATE_DIR"
      value: "$ENV{HOME}/yocto-cache/sstate"
  copy:
    - scripts/global-setup.sh: build/
    - config/global.conf: build/conf/
```

Both `variables` and `copy` are optional. Global copies are merged first, before
named-environment and device copies.
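For reference, `$ENV{VAR}` expansion can be sketched with a small regex substitution. This is a minimal illustration of the documented syntax, not the tool's actual `EnvironmentManager` code; unset variables are assumed to expand to an empty string:

```python
import os
import re

_ENV_RE = re.compile(r"\$ENV\{(\w+)\}")

def expand_env(value: str) -> str:
    """Replace each $ENV{NAME} with os.environ[NAME] (empty if unset)."""
    return _ENV_RE.sub(lambda m: os.environ.get(m.group(1), ""), value)

os.environ["BSP_DEMO_HOME"] = "/home/builder"  # demo variable
print(expand_env("$ENV{BSP_DEMO_HOME}/yocto-cache/downloads"))
# /home/builder/yocto-cache/downloads
```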
### `environments`

Named environments bundle a container reference, environment variables, and
optional file-copy entries together. The special name `default` is used by any
release that does not explicitly name an environment.

```yaml
environments:
  default:
    container: "debian-bookworm"
    variables:
      - name: "DL_DIR"
        value: "$ENV{HOME}/yocto-cache/downloads"
  isar-build:
    container: "isar-debian-trixie"
    variables:
      - name: "DL_DIR"
        value: "$ENV{HOME}/isar-cache/downloads"
    # Copy the QEMU run script into every Isar build directory
    copy:
      - isar/scripts/isar-runqemu.sh: build/
```
### `containers`

Docker container definitions for build environments:

```yaml
containers:
  debian-bookworm:
    image: "my-registry/debian/kas:5.1"
    file: Dockerfile
    args:
      - name: "DISTRO"
        value: "debian-bookworm"
      - name: "KAS_VERSION"
        value: "5.1"
  isar-container:
    image: "my-registry/isar/kas:5.1"
    file: Dockerfile.isar
    args: []
    privileged: true              # Run container in privileged mode (required for ISAR builds)
    runtime_args: "-p 2222:2222"  # Extra flags passed to the container engine
```
Container fields:

| Field | Type | Default | Description |
|---|---|---|---|
| `image` | string | – | Docker image name/tag |
| `file` | string | – | Path to Dockerfile for building the image |
| `args` | list | `[]` | Docker build arguments (name/value pairs) |
| `privileged` | boolean | `false` | Run container with elevated privileges. Required for ISAR builds. |
| `runtime_args` | string | – | Extra flags appended to the container engine run invocation (e.g. port forwarding, `--device` access). Forwarded via `--runtime-args`. |
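As an illustration of how `privileged` and `runtime_args` could map onto a container engine invocation, here is a hypothetical command builder. The real command is assembled by `kas-container`; this sketch only shows the intent of the two fields:

```python
import shlex

def container_run_args(cfg: dict) -> list[str]:
    """Illustrative mapping of the container fields above onto a docker
    run command line (an assumption, not the tool's actual builder)."""
    args = ["docker", "run", "--rm"]
    if cfg.get("privileged"):
        args.append("--privileged")          # elevated privileges (ISAR)
    if cfg.get("runtime_args"):
        args.extend(shlex.split(cfg["runtime_args"]))  # extra engine flags
    args.append(cfg["image"])
    return args

print(container_run_args({
    "image": "my-registry/isar/kas:5.1",
    "privileged": True,
    "runtime_args": "-p 2222:2222",
}))
```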
### `registry.devices`

Hardware device/board definitions:

```yaml
registry:
  devices:
    - slug: qemuarm64
      description: "QEMU ARM64 (emulated)"
      vendor: qemu
      soc_vendor: arm
      includes:
        - kas/qemu/qemuarm64.yaml
```
### `registry.releases`

Yocto/Isar release definitions referencing a distro:

```yaml
registry:
  releases:
    - slug: scarthgap
      description: "Yocto 5.0 LTS (Scarthgap)"
      distro: poky
      yocto_version: "5.0"
      includes:
        - kas/scarthgap.yaml
```
### `registry.features`

Optional feature definitions that can be enabled per-build. Features declare
their own KAS includes and can restrict themselves to specific
frameworks/distros (`compatible_with`), board vendors (`compatibility`), and/or
releases (`release_overrides`).

```yaml
registry:
  features:
    - slug: ostree
      description: "Enable OSTree support in the Yocto image"
      compatible_with: [yocto]  # restrict to Yocto framework
      includes:
        - features/ota/ostree/ostree.yml  # always included when feature is enabled
      release_overrides:
        - release: scarthgap
          includes:
            - features/ota/ostree/ostree-scarthgap.yml  # only for Scarthgap
        - release: styhead
          includes:
            - features/ota/ostree/ostree-styhead.yml  # only for Styhead
      vendor_overrides:
        - vendor: advantech
          soc_vendors:
            - vendor: nxp
              includes:
                - features/ota/ostree/modular-bsp-ota-nxp.yml  # only for Advantech NXP boards
```

See `docs/registry-v2.md` for the full reference, including `compatibility`,
`local_conf`, `env`, and all override fields.
### `registry.bsp`

Named presets combining a device, one or more releases, and optional features:

```yaml
registry:
  bsp:
    # Single-release preset (backward compatible)
    - name: poky-qemuarm64-scarthgap
      description: "Poky QEMU ARM64 Scarthgap (Yocto 5.0 LTS)"
      device: qemuarm64
      release: scarthgap
      features: []
      build:  # optional: override container and/or output path
        container: "debian-bookworm"
        path: build/poky-qemuarm64-scarthgap
      testing:  # optional: LAVA HIL test configuration
        lava:
          device_type: "qemu-aarch64"
          artifact_url: "http://files.ci/builds"
          tags: ["hil"]
          job_template: "kas/lava/qemu.yaml.j2"  # optional; builtin used if absent
          robot:
            suites:
              - tests/robot/smoke.robot
            variables:
              BOARD_IP: "10.0.0.5"

    # Multi-release preset: use "releases" (plural) to avoid repeating the
    # same entry for every Yocto release. The resolver expands this into one
    # preset per release, named "{name}-{release_slug}":
    #   poky-qemuarm64-scarthgap (auto-composed path: build/poky-qemuarm64-scarthgap)
    #   poky-qemuarm64-styhead   (auto-composed path: build/poky-qemuarm64-styhead)
    # The "testing" block (and "deploy", "local_conf", "targets") is inherited
    # by every expanded preset, so a single testing block covers all releases.
    - name: poky-qemuarm64
      description: "Poky QEMU ARM64"
      device: qemuarm64
      releases: [scarthgap, styhead]
      features: [systemd, debug]
      build:  # optional: container override (path is always auto-composed)
        container: "debian-bookworm"
      testing:  # inherited by poky-qemuarm64-scarthgap AND poky-qemuarm64-styhead
        lava:
          device_type: "qemu-aarch64"
          artifact_url: "http://files.ci/builds"
          tags: ["hil", "qemu"]
          robot:
            suites:
              - tests/robot/smoke.robot
            variables:
              BOARD_IP: "10.0.0.5"
```
**Note:** `release` (singular) and `releases` (plural) are mutually exclusive; exactly one must be specified per preset entry. When `releases` is used, all non-build fields (`features`, `local_conf`, `targets`, `deploy`, `testing`) are inherited unchanged by every expanded preset. Build paths are computed per release as `{build.path or "build/{name}"}-{release_slug}`. You can therefore test every release in the list with a single `testing` block:

```bash
bsp test poky-qemuarm64-scarthgap --wait
bsp test poky-qemuarm64-styhead --wait
```
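The expansion rule in the note above can be sketched as follows. This is an assumed implementation that reproduces the documented naming and path composition, not the resolver's actual code:

```python
def expand_preset(preset: dict) -> list[dict]:
    """Expand a multi-release preset into one preset per release, named
    '{name}-{release_slug}', with non-build fields inherited and the
    build path composed per release (assumed logic)."""
    if "releases" not in preset:
        return [preset]  # single-release preset: used as-is
    base = (preset.get("build") or {}).get("path") or f"build/{preset['name']}"
    expanded = []
    for slug in preset["releases"]:
        entry = {k: v for k, v in preset.items() if k != "releases"}
        entry["name"] = f"{preset['name']}-{slug}"
        entry["release"] = slug
        entry["build"] = dict(preset.get("build") or {}, path=f"{base}-{slug}")
        expanded.append(entry)
    return expanded

for p in expand_preset({"name": "poky-qemuarm64",
                        "device": "qemuarm64",
                        "releases": ["scarthgap", "styhead"],
                        "testing": {"lava": {"device_type": "qemu-aarch64"}}}):
    print(p["name"], "->", p["build"]["path"])
```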
### `deploy` (optional)

Global cloud deployment configuration applied to all builds. An individual
`BspPreset` can also include a `deploy:` block that overrides specific settings
for that preset (see "Per-preset deploy override" below).

```yaml
deploy:
  provider: azure                                # "azure" (default) or "aws"
  account_url: $ENV{AZURE_STORAGE_ACCOUNT_URL}   # Azure only; supports $ENV{} expansion
  container: bsp-artifacts                       # Azure container name
  # bucket: my-s3-bucket                         # AWS alternative to container
  prefix: "{vendor}/{device}/{release}/{date}"   # remote path prefix template
  patterns:                                      # glob patterns for files to upload
    - "**/*.wic.gz"
    - "**/*.wic.bz2"
    - "**/*.tar.bz2"
    - "**/*.ext4"
    - "**/*.sdimg"
  artifact_dirs:                                 # subdirs under build_path to search
    - tmp/deploy/images
    - tmp/deploy/sdk
  include_manifest: true                         # upload a JSON manifest of all artifacts
```
Prefix template variables:

| Variable | Value |
|---|---|
| `{device}` | Device slug |
| `{release}` | Release slug |
| `{distro}` | Effective distro slug |
| `{vendor}` | Device vendor slug |
| `{date}` | Build date in `YYYY-MM-DD` format |
| `{datetime}` | Build datetime in `YYYYMMDD-HHMMSS` format |
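Rendering such a prefix template is plain `str.format` substitution. A minimal sketch, assuming the table above is the full variable set (the actual deployer may compute the timestamps differently):

```python
from datetime import datetime, timezone

def render_prefix(template: str, **slugs: str) -> str:
    """Substitute the documented prefix variables into a template string."""
    now = datetime.now(timezone.utc)
    return template.format(
        date=now.strftime("%Y-%m-%d"),          # {date}
        datetime=now.strftime("%Y%m%d-%H%M%S"),  # {datetime}
        **slugs,                                 # {device}, {release}, ...
    )

print(render_prefix("{vendor}/{device}/{release}/{date}",
                    vendor="qemu", device="qemuarm64", release="scarthgap"))
```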
#### Per-preset deploy override

Add a `deploy:` block directly on a `BspPreset` to override specific global
deploy settings for that preset. Only fields that differ from their default
values override the global config; all other fields keep the global value.
CLI flags (`--provider`, `--container`, …) are applied last.

```yaml
deploy:  # global: Azure, shared container
  provider: azure
  account_url: $ENV{AZURE_STORAGE_ACCOUNT_URL}
  container: bsp-artifacts

registry:
  bsp:
    # Uses global settings unchanged.
    - name: qemuarm64-scarthgap
      device: qemuarm64
      release: scarthgap
      features: []

    # Overrides only container and prefix; provider/account_url come from global.
    - name: imx8mp-adv-scarthgap-release
      device: imx8mp-adv
      release: scarthgap
      features: []
      deploy:
        container: imx8mp-release-artifacts          # override
        prefix: "release/{device}/{release}/{date}"  # override
        patterns:
          - "**/*.wic.gz"                            # override

    # Switches to AWS entirely for this preset.
    - name: aws-build-scarthgap
      device: qemuarm64
      release: scarthgap
      features: []
      deploy:
        provider: aws            # override: switch provider
        container: my-s3-bucket  # override: bucket name
```
See docs/artifact-deployment.md for full details.
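The "only non-default fields override" rule can be sketched as a dict merge. The defaults dict below is hypothetical; the real defaults live on `DeployConfig`:

```python
# Hypothetical defaults used to decide which preset fields count as "set"
DEPLOY_DEFAULTS = {"provider": "azure", "container": "", "prefix": "", "patterns": []}

def effective_deploy(global_cfg: dict, preset_cfg: dict) -> dict:
    """A preset field replaces the global value only when it differs from
    its default; everything else keeps the global value (assumed logic)."""
    merged = dict(global_cfg)
    for key, value in preset_cfg.items():
        if value != DEPLOY_DEFAULTS.get(key):
            merged[key] = value
    return merged

cfg = effective_deploy(
    {"provider": "azure", "container": "bsp-artifacts", "prefix": "{device}/{date}"},
    {"container": "imx8mp-release-artifacts"},  # only this field is overridden
)
print(cfg)
```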
### `lava` (optional)

Top-level LAVA server settings shared across all presets. All values support
`$ENV{}` expansion.

```yaml
lava:
  server: "$ENV{LAVA_SERVER}"   # LAVA server base URL (required for bsp test)
  token: "$ENV{LAVA_TOKEN}"     # API authentication token
  username: "$ENV{LAVA_USER}"   # Username (optional)
  wait_timeout: 3600            # Maximum seconds to wait for a job (default: 3600)
  poll_interval: 30             # Polling interval in seconds (default: 30)
```
`lava` fields:

| Field | Type | Default | Description |
|---|---|---|---|
| `server` | string | – | LAVA server base URL (e.g. `https://lava.example.com`) |
| `token` | string | `""` | LAVA API authentication token |
| `username` | string | `""` | LAVA username |
| `wait_timeout` | integer | `3600` | Maximum wait time in seconds when `--wait` is used |
| `poll_interval` | integer | `30` | Job status polling interval in seconds |
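To show how `wait_timeout` and `poll_interval` interact, here is a minimal polling-loop sketch. It is an illustration of the waiting behaviour, not the actual `LavaClient.wait_for_job` implementation; the terminal state names are the usual LAVA job states:

```python
import time

def wait_for_job(get_status, timeout: int = 3600, poll_interval: int = 30) -> str:
    """Poll get_status() every poll_interval seconds until the job reaches
    a terminal state or timeout seconds have elapsed."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in ("Complete", "Incomplete", "Canceled"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError(f"job still running after {timeout}s")

# Simulated status source: the job completes on the third poll
states = iter(["Submitted", "Running", "Complete"])
print(wait_for_job(lambda: next(states), timeout=5, poll_interval=0))
# Complete
```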
## KAS Configuration Files

KAS configuration files define Yocto layer repositories, machine settings, and build targets. See the `examples/kas/` directory for reference configurations.
### QEMU Example Configurations

The `examples/` directory contains ready-to-use KAS configurations for QEMU targets:

| File | Description |
|---|---|
| `examples/kas/yocto/releases/scarthgap.yaml` | Yocto Scarthgap (5.0 LTS) base configuration |
| `examples/kas/yocto/releases/styhead.yaml` | Yocto Styhead (5.1) base configuration |
| `examples/kas/yocto/releases/walnascar.yaml` | Yocto Walnascar (5.2) base configuration |
| `examples/kas/devices/qemu/qemuarm64.yaml` | QEMU ARM64 machine configuration |
| `examples/kas/devices/qemu/qemuarm.yaml` | QEMU ARM (32-bit) machine configuration |
| `examples/kas/devices/qemu/qemux86-64.yaml` | QEMU x86-64 machine configuration |
| `examples/kas/isar/` | Isar build-system example configurations |
### KAS File Structure

```yaml
header:
  version: 14
  includes:  # Optional: include other KAS files
    - base.yaml

distro: poky
machine: qemuarm64

target:
  - core-image-minimal

repos:
  poky:
    url: "https://git.yoctoproject.org/poky"
    commit: "abc123..."
    path: "layers/poky"
    layers:
      meta:
      meta-poky:

local_conf_header:
  my_config: |
    DISTRO_FEATURES += "x11"
```
## Python API

You can also use bsp-registry-tools as a Python library:

```python
from bsp import BspManager, EnvironmentManager, KasManager, RegistryFetcher

# Fetch registry from remote (clone on first call, pull on subsequent)
fetcher = RegistryFetcher()
registry_path = fetcher.fetch_registry(
    repo_url="https://github.com/Advantech-EECC/bsp-registry.git",
    branch="main",
    update=True,
)

# Load and manage the BSP registry
manager = BspManager(str(registry_path))
manager.initialize()

# List BSPs programmatically
for bsp in manager.model.registry.bsp:
    print(f"{bsp.name}: {bsp.description}")

# Get a specific BSP
bsp = manager.get_bsp_by_name("poky-qemuarm64-scarthgap")

# Environment variable management with $ENV{} expansion
from bsp import EnvironmentVariable

env_vars = [
    EnvironmentVariable(name="DL_DIR", value="$ENV{HOME}/downloads"),
]
env_manager = EnvironmentManager(env_vars)
print(env_manager.get_value("DL_DIR"))  # Expanded path

# Use KasManager directly
kas = KasManager(
    kas_files=["kas/scarthgap.yaml", "kas/qemu/qemuarm64.yaml"],
    build_dir="build/my-bsp",
    use_container=False,
)
kas.validate_kas_files()
```
### Starting the HTTP server programmatically

```python
import uvicorn
from bsp.server import create_app

# Create and run the server (requires bsp-registry-tools[server])
app = create_app(registry_path="bsp-registry.yaml")
uvicorn.run(app, host="0.0.0.0", port=8080)
```
### Cloud Deployment API

```python
from bsp import BspManager

manager = BspManager("bsp-registry.yaml")
manager.initialize()

# Deploy artifacts from a preset build (dry run)
result = manager.deploy_bsp("poky-qemuarm64-scarthgap", dry_run=True)
print(f"Would upload {result.success_count} artifact(s)")

# Deploy with overrides
result = manager.deploy_bsp(
    "poky-qemuarm64-scarthgap",
    deploy_overrides={
        "provider": "aws",
        "container": "my-s3-bucket",
        "prefix": "builds/{device}/{release}/{date}",
    },
)
for artifact in result.artifacts:
    print(f"  {artifact.local_path.name} -> {artifact.remote_url}")

# Deploy by components
result = manager.deploy_by_components(
    device_slug="qemuarm64",
    release_slug="scarthgap",
)

# Use the storage backend and deployer directly
from bsp.storage import create_backend
from bsp.deployer import ArtifactDeployer
from bsp.models import DeployConfig

config = DeployConfig(
    provider="azure",
    container="bsp-artifacts",
    prefix="{device}/{release}/{date}",
    patterns=["**/*.wic.gz"],
)
backend = create_backend("azure", container_name="bsp-artifacts", dry_run=True)
deployer = ArtifactDeployer(config, backend)
result = deployer.deploy("build/poky-qemuarm64-scarthgap", device="qemuarm64", release="scarthgap")
```
## Development

### Setup Development Environment

```bash
git clone https://github.com/Advantech-EECC/bsp-registry-tools.git
cd bsp-registry-tools
pip install -e .
```
### Running Tests

```bash
# Run all tests
pytest

# Run with verbose output
pytest -v

# Run with coverage report
pytest --cov=bsp --cov-report=term-missing

# Run a specific test class
pytest tests/test_bsp.py::TestEnvironmentManager -v
```
### Project Structure

```text
bsp-registry-tools/
├── bsp/
│   ├── __init__.py              # Public API exports
│   ├── cli.py                   # CLI entry point
│   ├── bsp_manager.py           # Main BSP coordinator
│   ├── registry_fetcher.py      # Remote registry clone/update
│   ├── remotes_manager.py       # Persistent named-remote CRUD (bsp remotes)
│   ├── kas_manager.py           # KAS build system integration
│   ├── environment.py           # Environment variable management
│   ├── path_resolver.py         # Path utilities
│   ├── models.py                # Dataclass models (v2.0 schema)
│   ├── resolver.py              # V2 resolver: device + release + features -> ResolvedConfig
│   ├── registry_writer.py       # RegistryWriter: CRUD + validation for registry entities
│   ├── lava_client.py           # LAVA REST API wrapper (submit, poll, results)
│   ├── lava_job_builder.py      # Jinja2 LAVA job YAML renderer
│   ├── gatherer.py              # ArtifactGatherer: download build artifacts from cloud
│   ├── deployer.py              # ArtifactDeployer: collect & upload build artifacts
│   ├── utils.py                 # YAML / Docker utilities
│   ├── exceptions.py            # Custom exceptions
│   ├── server/                  # Optional HTTP server (requires [server] extras)
│   │   ├── __init__.py          # Exports create_app
│   │   ├── app.py               # FastAPI application factory
│   │   ├── rest.py              # REST router (/api/v1/*)
│   │   ├── graphql_schema.py    # Strawberry GraphQL schema
│   │   └── types.py             # Pydantic response models
│   └── storage/                 # Cloud storage backends
│       ├── __init__.py          # Exports CloudStorageBackend and create_backend()
│       ├── base.py              # Abstract CloudStorageBackend base class
│       ├── azure.py             # AzureStorageBackend (azure-storage-blob)
│       ├── aws.py               # AwsStorageBackend (boto3)
│       └── factory.py           # create_backend() factory function
├── pyproject.toml               # Package configuration
├── README.md                    # This file
├── LICENSE                      # Apache 2.0 License
├── docs/
│   ├── registry-v2.md           # Full v2.0 schema reference
│   ├── registry-v1.md           # Legacy v1.0 schema reference
│   ├── migration-v1-to-v2.md    # Migration guide from v1 to v2
│   ├── server.md                # HTTP server (REST + GraphQL) reference
│   └── artifact-deployment.md   # Cloud deployment guide (Azure / AWS)
├── tests/
│   ├── conftest.py
│   ├── test_bsp_manager.py
│   ├── test_cli_basic.py
│   ├── test_cli_remote_flags.py
│   ├── test_deploy.py           # Deployment tests
│   ├── test_gatherer.py         # Gather (download) tests
│   ├── test_lava_client.py      # LAVA client unit tests (HTTP mocked)
│   ├── test_lava_job_builder.py # LAVA job template renderer tests
│   ├── test_models.py
│   ├── test_kas_manager.py
│   ├── test_environment.py
│   ├── test_path_resolver.py
│   ├── test_registry_fetcher.py
│   └── test_utils.py
├── examples/
│   ├── bsp-registry.yaml         # Sample v2.0 BSP registry for QEMU targets
│   ├── bsp-registry.devices.yaml # Devices include fragment example
│   ├── lava/
│   │   └── job-template.yaml.j2  # Annotated example LAVA job Jinja2 template
│   └── kas/
│       ├── yocto/               # Yocto Project KAS configurations
│       │   ├── releases/        # Per-release KAS files (scarthgap, styhead, walnascar, …)
│       │   ├── devices/         # Yocto-specific device KAS files (qemuarm64, qemuarm, …)
│       │   ├── distro/          # Distro fragments (poky, harden)
│       │   └── features/        # Feature KAS files (systemd, debug, ssh, …)
│       ├── isar/                # Isar build system KAS configurations
│       ├── devices/qemu/        # Shared QEMU device configurations (qemuarm64, qemux86-64, …)
│       └── vendors/qemu/        # Vendor-level shared KAS fragments
└── .github/
    └── workflows/
        ├── tests.yml            # CI: run tests on push/PR
        ├── cli-tests.yml        # CI: integration CLI tests
        └── publish.yml          # CD: publish to PyPI on release
```
## Publishing to PyPI

This repository uses GitHub Actions for automated publishing.

### Setup

1. Create PyPI and TestPyPI accounts and configure Trusted Publishers:
   - PyPI: add a GitHub Actions publisher for `Advantech-EECC/bsp-registry-tools`
   - TestPyPI: same configuration on test.pypi.org
2. Create GitHub Environments named `pypi` and `testpypi` in your repository settings.
### Publish Workflow

**Automatic (on GitHub Release):**

- Creating a non-prerelease GitHub Release automatically publishes to both TestPyPI and PyPI.
- Creating a prerelease publishes to TestPyPI only.

**Manual:** GitHub → Actions → "Publish to PyPI" → Run workflow → Select environment
### Build Locally

```bash
pip install build
python -m build
# Artifacts are in dist/
```
## Architecture

### Classes

| Class | Description |
|---|---|
| `BspManager` | Main coordinator for BSP operations |
| `KasManager` | Handles KAS build system operations |
| `EnvironmentManager` | Manages build environment variables with `$ENV{}` expansion |
| `PathResolver` | Utility for path resolution and validation |
| `RegistryFetcher` | Clones/updates a remote git-hosted BSP registry to a local cache |
| `RemotesManager` | Reads/writes `~/.config/bsp/remotes.yaml`: CRUD for named remote registry sources |
| `bsp.server.create_app` | Factory that creates a FastAPI app with REST + GraphQL endpoints |
| `ArtifactDeployer` | Discovers and uploads Yocto build artifacts to cloud storage |
| `ArtifactGatherer` | Downloads previously uploaded Yocto build artifacts from cloud storage |
| `AzureStorageBackend` | Azure Blob Storage backend (requires `azure-storage-blob`) |
| `AwsStorageBackend` | AWS S3 backend (requires `boto3`) |
| `LavaClient` | LAVA REST API wrapper: submit, poll, and fetch results for HIL test jobs |
### Data Classes

| Class | Description |
|---|---|
| `RegistryRoot` | Root registry container (specification, registry, containers, environments, deploy, lava) |
| `Registry` | Contains devices, releases, features, presets, frameworks, and distros |
| `Device` | Hardware device/board definition (slug, vendor, soc_vendor, includes) |
| `Release` | Yocto/Isar release definition (slug, distro reference, includes) |
| `Feature` | Optional BSP feature (slug, includes, compatibility constraints, release_overrides, vendor_overrides) |
| `BspPreset` | Named preset combining device + release + features + optional deploy and testing configs |
| `Framework` | Build-system framework definition (e.g. Yocto, Isar) |
| `Distro` | Linux distribution definition (e.g. Poky, Isar distro) |
| `Docker` | Docker image, build arg, privileged mode, and runtime_args configuration |
| `NamedEnvironment` | Named environment bundling a container reference, variables, and optional copy entries |
| `EnvironmentVariable` | Name/value pair with `$ENV{}` expansion support |
| `DeployConfig` | Cloud deployment configuration (provider, container/bucket, prefix, patterns, artifact dirs) |
| `DeployResult` | Result of a deployment run: list of uploaded artifacts with URLs and checksums |
| `GatherResult` | Result of a gather run: list of local paths for downloaded artifacts |
| `LavaServerConfig` | Registry-level LAVA server connection settings (server, token, timeouts) |
| `LavaTestConfig` | Per-preset LAVA test settings (device_type, artifact_url, tags, job_template, robot) |
| `RobotTestConfig` | Robot Framework suite list and variable dict embedded in a LAVA job |
| `TestingConfig` | Top-level testing block on a BspPreset (currently wraps LavaTestConfig) |
### Exceptions

| Exception | Description |
|---|---|
| `ScriptError` | Base exception for all script errors |
| `ConfigurationError` | Configuration file issues |
| `BuildError` | Build process failures |
| `DockerError` | Docker operation failures |
| `KasError` | KAS operation failures |
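Since every specific error derives from `ScriptError` (the base in the table above), a single `except` clause can catch any tool error. The class bodies below are a sketch of the assumed hierarchy, not a copy of `bsp/exceptions.py`:

```python
# Assumed hierarchy: each specific error subclasses ScriptError
class ScriptError(Exception): pass
class ConfigurationError(ScriptError): pass
class BuildError(ScriptError): pass
class DockerError(ScriptError): pass
class KasError(ScriptError): pass

try:
    raise KasError("kas build failed")
except ScriptError as err:   # catches KasError, BuildError, DockerError, ...
    print(type(err).__name__, "-", err)
# KasError - kas build failed
```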
## License

This project is licensed under the Apache 2.0 License; see the LICENSE file for details.

## Contributing

Contributions are welcome! Please open an issue or submit a pull request on GitHub.
## Download files
### File details: bsp_registry_tools-1.0.0.1.tar.gz

- Size: 213.3 kB
- Tags: Source
- Uploaded using Trusted Publishing: yes
- Uploaded via: twine/6.1.0, CPython/3.13.12
File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `540ccc1934dab16c88fd1486e600e31ddeec5519e4a2714eb73a5e17fd6915d1` |
| MD5 | `11c18afc25b6c98f5ebb26c480ebb8f6` |
| BLAKE2b-256 | `4f595d7ccacc7b4e935342a4efd0487dc1f89a30a14adab87a3f315bf534d62b` |
Provenance: the following attestation bundle was made for `bsp_registry_tools-1.0.0.1.tar.gz`:

- Publisher: `publish.yml` on `Advantech-EECC/bsp-registry-tools`
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: `bsp_registry_tools-1.0.0.1.tar.gz`
- Subject digest: `540ccc1934dab16c88fd1486e600e31ddeec5519e4a2714eb73a5e17fd6915d1`
- Sigstore transparency entry: 1437418669
- Permalink: `Advantech-EECC/bsp-registry-tools@c04c08fb811a77cfd29cc1fd79512171a96be1d8`
- Branch / Tag: `refs/tags/v1.0.0.1`
- Owner: https://github.com/Advantech-EECC
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: `publish.yml@c04c08fb811a77cfd29cc1fd79512171a96be1d8`
- Trigger Event: release
### File details: bsp_registry_tools-1.0.0.1-py3-none-any.whl

- Size: 131.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing: yes
- Uploaded via: twine/6.1.0, CPython/3.13.12
File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `a9f18cf51518988889400db7bab4fd0592b19fc0a6eb8f10d9d6212cd2600b41` |
| MD5 | `62dec3dddf48d5a3f6c20a9eb9f0bd21` |
| BLAKE2b-256 | `582c83f0c510aea15fc40d551f01e4b503d5ae20dd36078465e3aaaef13f608c` |
Provenance: the following attestation bundle was made for `bsp_registry_tools-1.0.0.1-py3-none-any.whl`:

- Publisher: `publish.yml` on `Advantech-EECC/bsp-registry-tools`
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: `bsp_registry_tools-1.0.0.1-py3-none-any.whl`
- Subject digest: `a9f18cf51518988889400db7bab4fd0592b19fc0a6eb8f10d9d6212cd2600b41`
- Sigstore transparency entry: 1437418675
- Permalink: `Advantech-EECC/bsp-registry-tools@c04c08fb811a77cfd29cc1fd79512171a96be1d8`
- Branch / Tag: `refs/tags/v1.0.0.1`
- Owner: https://github.com/Advantech-EECC
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: `publish.yml@c04c08fb811a77cfd29cc1fd79512171a96be1d8`
- Trigger Event: release