CruciHiL
Bulletproof, easy-to-use Hardware-in-the-Loop (HiL) testing for firmware teams.
Write a test in Python. Run it against simulation before hardware exists. Deploy to real hardware with zero test changes. See results in CI/CD automatically. Ask AI what broke and why.
Why CruciHiL
Legacy HiL tools (dSPACE, NI, Vector) are expensive, slow to configure, and hostile to modern dev workflows. CruciHiL is built for teams that move fast:
- Python-first — no proprietary scripting languages, full IDE support
- Simulation-to-hardware parity — same test file, swap a TOML config
- CI/CD native — runs headless, produces JUnit XML, integrates with GitHub Actions
- AI-powered analysis — MCP server connects Claude/GPT directly to test results and signal traces
Architecture
Layer 6 — Interfaces Web Dashboard · CLI · CI/CD webhooks
Layer 5 — AI Interface MCP Server (FastMCP) — 11 tools, vendor-agnostic
Layer 4 — Cloud Control FastAPI + PostgreSQL — orchestration and history
Layer 3 — Local Agent Test runner · YAML executor · result reporter
Layer 2 — Rig HAL rig.can / rig.sim / rig.someip / rig.doip / rig.ecu
Layer 1 — Hardware CAN · Ethernet · GPIO · Power · ECUs
Test code only ever touches Layer 2. Hardware details live in TOML config, never in test code.
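The Layer 2 boundary can be sketched in a few lines. This is an illustrative toy, not CruciHiL's internals: the class names (`VirtualCanBackend`, `CanNamespace`) are hypothetical stand-ins showing how a test-facing facade keeps backend details out of test code.

```python
from dataclasses import dataclass, field

class VirtualCanBackend:
    """Layer 1/2 detail — chosen by the rig TOML, never by test code."""
    def send(self, message, fields):
        return ("sent", message, fields)

@dataclass
class CanNamespace:
    """What tests see as rig.can — a thin facade over whatever backend is configured."""
    backend: VirtualCanBackend = field(default_factory=VirtualCanBackend)
    def send(self, message, fields):
        return self.backend.send(message, fields)

@dataclass
class Rig:
    """Layer 2 facade injected into test functions."""
    can: CanNamespace = field(default_factory=CanNamespace)

rig = Rig()
print(rig.can.send("EngineControl", {"Throttle": 50.0}))
```

Swapping the backend (virtual → SocketCAN) changes the facade's wiring, not the test code that calls `rig.can.send()`.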
Installation
pip install crucihil
Or from source:
git clone <repo>
cd crucihil
python -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
Quick start (no hardware needed)
# 1. Generate a runnable example
crucihil scaffold --example hello_world
# 2. Run it immediately — virtual backend, no hardware required
cd examples/hello_world
crucihil run --suite suites/hello.yaml --rig rigs/virtual.toml -v
Three built-in examples demonstrate the full framework:
| Example | What it shows |
|---|---|
| hello_world | Minimal install check — two tests, no DBC |
| can_signals | BSE simulation + DBC + rig.can.expect() assertions |
| fault_injection | FaultDescriptor pattern, sim.override(), power cycle |
CLI Reference
crucihil --help
Commands:
version Show CruciHiL version
run Run a test suite against a rig
scaffold Generate a test project, runnable example, or backend adapter stub
init Interactive wizard — create a rig TOML, optionally register with cloud
discover AI-assisted rig setup (probes hardware, generates TOML)
agent Start the persistent local agent daemon
deregister Remove a rig's saved API key from local credentials
crucihil run
Run a YAML test suite against a rig TOML.
crucihil run --suite suites/engine.yaml --rig rigs/virtual.toml
crucihil run --suite suites/engine.yaml --rig rigs/my_bench.toml --verbose
crucihil run --suite suites/engine.yaml --rig rigs/my_bench.toml \
--output results.xml --html results.html
Options:
--suite, -s PATH     Path to YAML suite manifest (required)
--rig, -r PATH       Path to rig TOML config (required)
--output, -o PATH    Write JUnit XML results here
--html PATH          Write self-contained HTML report here
--tags TAGS          Comma-separated tags — only run matching tests
--suite-type TYPES   Comma-separated suite types — only run matching tests
--verbose, -v        Show per-test status and debug logs
Exit codes: 0 = all passed, 1 = one or more failed, 2 = framework error.
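This exit-code contract is what lets a CI job gate on a run without parsing output. A minimal sketch of the mapping (illustrative, not CruciHiL's actual runner; the assumption that blocked tests do not fail the run follows from the status rules below):

```python
def exit_code(statuses: list[str], framework_error: bool = False) -> int:
    """Map per-test statuses to the documented CLI exit codes:
    0 = all passed, 1 = one or more failed, 2 = framework error."""
    if framework_error:
        return 2  # the runner itself crashed
    if any(status == "fail" for status in statuses):
        return 1  # at least one firmware bug
    return 0      # all passed (blocked tests assumed not to fail the run)

print(exit_code(["pass", "pass"]))          # 0
print(exit_code(["pass", "fail"]))          # 1
print(exit_code([], framework_error=True))  # 2
```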
Filtering examples:
# Run only tests tagged 'smoke'
crucihil run --suite suites/regression.yaml --rig rigs/virtual.toml --tags smoke
# Run only regression suite-type tests
crucihil run --suite suites/all.yaml --rig rigs/my_bench.toml --suite-type regression
# Combine: smoke tests on a specific interface
crucihil run --suite suites/all.yaml --rig rigs/my_bench.toml --tags smoke,can
Module resolution: crucihil run adds the current directory and the suite file's parent directory to sys.path, so module: tests.smoke in your YAML resolves against your project root automatically. No PYTHONPATH setup needed.
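The resolution behavior can be sketched with stdlib importlib. This is a hypothetical helper (not the actual runner code) showing the two-directory search path described above:

```python
import importlib
import sys
from pathlib import Path

def resolve_test_function(suite_path: str, module: str, function: str):
    """Sketch: prepend cwd and the suite file's parent directory to
    sys.path, then import the dotted module and fetch the function —
    so `module: tests.smoke` resolves against the project root."""
    for root in (Path.cwd(), Path(suite_path).resolve().parent):
        if str(root) not in sys.path:
            sys.path.insert(0, str(root))
    return getattr(importlib.import_module(module), function)

# Demo with a stdlib module standing in for tests.smoke:
fn = resolve_test_function("suites/engine.yaml", "json", "dumps")
print(fn.__name__)  # dumps
```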
crucihil scaffold
Three modes in one command.
Mode 1 — Test project from a rig TOML
crucihil scaffold --rig rigs/my_rig.toml
crucihil scaffold --rig rigs/my_rig.toml --output-dir /path/to/project
Reads the TOML, discovers what hardware is configured, and generates a runnable test project:
suites/smoke.yaml — quick health checks, one per hardware section
suites/regression.yaml — full coverage suite
tests/__init__.py
tests/smoke.py — documented Python stubs with rig.can.expect() patterns
tests/regression.py — regression stubs with fault injection patterns
Power and GPIO control is placed in YAML setup:/teardown: steps (declarative). Python functions contain only assertions.
Mode 2 — Runnable examples (no hardware required)
crucihil scaffold --example hello_world # minimal install check
crucihil scaffold --example can_signals # BSE simulation + signal assertions
crucihil scaffold --example fault_injection # FaultDescriptor + sim.override()
Each example includes a virtual rig TOML, YAML suite, Python test file, and (for can/fault examples) an example DBC. Run immediately with no setup:
cd examples/can_signals
crucihil run --suite suites/can_signals.yaml --rig rigs/virtual.toml -v
Mode 3 — Custom backend adapter stub
crucihil scaffold --adapter power --name RelayBoard
crucihil scaffold --adapter can --name PeakUSB
crucihil scaffold --adapter gpio --name FTDIBoard
crucihil scaffold --adapter doip --name MyDoIPGW
crucihil scaffold --adapter someip --name VSomeIPProxy
crucihil scaffold --adapter udp --name SensorStream
crucihil scaffold --adapter uds --name CANIsotpClient
Generates a Python class stub implementing the HAL ABC for that backend type. All abstract methods are stubbed with docstrings describing what each must do. The command also prints the exact TOML snippet for wiring it in:
$ crucihil scaffold --adapter power --name RelayBoard
wrote relay_board_backend.py
Reference it in your rig TOML:
[rig.power.ecu_main]
backend = "mypackage.relay_board.RelayBoardBackend"
default = "off"
--output-dir (all modes): directory to write files into. Defaults to the current directory (.).
crucihil init
Interactive wizard — creates a validated rig TOML and optionally registers the rig with the cloud.
crucihil init
crucihil init --output-dir rigs/
What it does:
- Asks for rig name and platform
- Asks: virtual simulation or real hardware?
  - Virtual — generates a complete working TOML instantly, no further prompts
  - Hardware — walks through each section:
    - CAN interfaces (auto-detected from ip link), bitrate presets (125k / 250k / 500k / 1M / 2M / 5M), FD mode, backend
    - Ethernet interfaces for DoIP/SOME/IP
    - Power rails — name each rail (e.g. 12v_supply, 5v_logic), pick backend, supports multiple
    - ECUs — name each ECU, set logical address, transport (DoIP/CAN-ISOtp), power rail reference, supports multiple
    - DBC file path
- Shows the generated TOML for review
- Asks for confirmation before writing
- Optionally registers with the cloud (email + password login, saves API key to ~/.crucihil/credentials.toml)
After init, run crucihil scaffold --rig rigs/<name>.toml to get a test project.
crucihil discover
AI-assisted rig setup — probes the system and generates a TOML.
crucihil discover
crucihil discover --no-ai # stub TOML from probe results, no API key needed
crucihil discover --provider openai
crucihil discover --provider gemini
crucihil discover --describe "Orin NX, two CAN buses on can0/can1, DoIP on eth0"
crucihil discover --model claude-haiku-4-5-20251001
Options:
--output-dir, -d PATH Directory to write generated TOML (default: rigs/)
--provider, -p NAME AI provider: 'anthropic', 'openai', or 'gemini' (auto-detected from env)
--describe TEXT Hardware description passed to AI (skips interactive prompt)
--no-ai Skip AI, generate stub TOML from probe results only
--model NAME AI model override (default: claude-sonnet-4-6 / gpt-4o / gemini-2.0-flash)
API key lookup order: ANTHROPIC_API_KEY → OPENAI_API_KEY → GOOGLE_API_KEY → interactive prompt.
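The lookup order can be sketched as a simple environment scan. Illustrative only (the interactive-prompt fallback is omitted and the helper name is hypothetical):

```python
import os

# Documented lookup order: ANTHROPIC_API_KEY → OPENAI_API_KEY → GOOGLE_API_KEY
PROVIDER_ENV = [
    ("anthropic", "ANTHROPIC_API_KEY"),
    ("openai", "OPENAI_API_KEY"),
    ("gemini", "GOOGLE_API_KEY"),
]

def detect_provider(env=os.environ):
    """Return (provider, key) for the first configured provider, else None
    (the CLI would then fall back to an interactive prompt)."""
    for provider, var in PROVIDER_ENV:
        if env.get(var):
            return provider, env[var]
    return None

print(detect_provider({"OPENAI_API_KEY": "sk-test"}))  # ('openai', 'sk-test')
```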
Generated TOML is validated against the RigConfig schema before write. Validation errors are shown as warnings — you can still write and edit manually.
crucihil agent
Runs on the bench machine. Connects to the cloud control plane via WebSocket, receives test run commands, streams results back.
crucihil agent --rig rigs/my_bench.toml
crucihil agent --rig rigs/my_bench.toml --verbose
Options:
--rig, -r PATH   Path to rig TOML config (required)
--cache PATH     SQLite result cache path (default: ~/.crucihil/results.db)
--verbose, -v    Enable debug logging
First-boot auto-registration: if [rig.cloud] contains a registration_token but no api_key, the agent registers itself on first boot, saves the key to ~/.crucihil/credentials.toml, and connects. No manual steps needed.
[rig.cloud]
url = "https://crucihil-server.fly.dev"
registration_token = "your-token-here"
# api_key is written automatically after first boot
Without [rig.cloud], the agent runs in local-only mode (no cloud sync).
crucihil deregister
Remove a rig's saved API key from the local credentials store. Run this after deleting a rig from the dashboard.
crucihil deregister my_rig
crucihil deregister my_rig --server https://crucihil-server.fly.dev
Options:
RIG_NAME (positional) Rig name to remove from ~/.crucihil/credentials.toml
--server URL Server URL the rig was registered with (default: cloud server)
Writing Tests
Test functions are plain async Python. The rig object is injected by the framework — never constructed in test code.
from crucihil.hal.rig import Rig
from crucihil.hal.models.exceptions import BlockedError
async def test_engine_startup(rig: Rig, expected_rpm: float = 800.0) -> None:
result = await rig.can.expect(
signal="EngineData.RPM",
condition=lambda v: v > expected_rpm,
timeout=2.0,
)
assert result.passed, result.fail_msg
Switch from virtual to real hardware: change --rig rigs/virtual.toml to --rig rigs/my_bench.toml. The test is unchanged.
Status rules:
- assert fails → status = "fail" — firmware bug, counts against pass rate
- raise BlockedError("msg") → status = "blocked" — precondition failed, does NOT count against pass rate
- Clean return → status = "pass"
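The status mapping can be sketched as exception handling in the runner. This is an illustrative toy (not CruciHiL's actual runner; `BlockedError` here is a local stand-in for crucihil's exception):

```python
import asyncio

class BlockedError(Exception):
    """Stand-in for crucihil's BlockedError — a failed precondition."""

async def run_test(func, **params) -> str:
    """Clean return → pass, AssertionError → fail, BlockedError → blocked."""
    try:
        await func(**params)
        return "pass"
    except BlockedError:
        return "blocked"  # does NOT count against pass rate
    except AssertionError:
        return "fail"     # firmware bug, counts against pass rate

async def ok(): pass
async def bug(): assert False, "RPM too low"
async def no_precondition(): raise BlockedError("rig offline")

print(asyncio.run(run_test(ok)))               # pass
print(asyncio.run(run_test(bug)))              # fail
print(asyncio.run(run_test(no_precondition)))  # blocked
```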
Rig HAL API
# CAN
await rig.can.send(message="EngineControl", fields={"Throttle": 50.0})
result = await rig.can.expect(signal="EngineData.RPM", condition=lambda v: v > 800, timeout=2.0)
# Simulation (virtual backend)
await rig.sim.set("EngineData.RPM", 2500.0)
rig.sim.start("EngineData") # start BSE cyclic transmission
rig.sim.stop("EngineData")
async with rig.sim.override("EngineData.RPM", 5500.0):
... # value restored on exit, even on exception
# Fault injection — FaultDescriptor pattern (NOT a coroutine)
async with rig.fault.inject(rig.fault.can_dropout(arb_id=0x100, duration=1.0)):
await asyncio.sleep(1.0)
async with rig.fault.inject(rig.fault.power_cycle(rail="ecu_main", off_duration=0.5)):
await asyncio.sleep(0.5)
# ECU diagnostics (DoIP)
response = await rig.ecu["ecu_main"].uds.ecu_reset(reset_type=0x01)
assert response.positive, f"ECU reset failed: {response}"
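The restore-on-exit guarantee of sim.override can be sketched with contextlib. A minimal sketch assuming a dict-backed signal store (not the real virtual backend):

```python
import asyncio
from contextlib import asynccontextmanager

class SimStore:
    """Toy signal store standing in for the virtual simulation backend."""
    def __init__(self):
        self.signals = {}

    async def set(self, name, value):
        self.signals[name] = value

    @asynccontextmanager
    async def override(self, name, value):
        previous = self.signals.get(name)  # sketch: absent signals restore to None
        await self.set(name, value)
        try:
            yield
        finally:
            await self.set(name, previous)  # restored even on exception

async def demo():
    sim = SimStore()
    await sim.set("EngineData.RPM", 800.0)
    async with sim.override("EngineData.RPM", 5500.0):
        assert sim.signals["EngineData.RPM"] == 5500.0
    return sim.signals["EngineData.RPM"]

print(asyncio.run(demo()))  # 800.0
```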
Suite YAML format
Tests are declared in YAML — hardware setup, metadata, and filtering. Python functions contain only assertions.
suite:
name: engine_validation
version: "1.0.0"
defaults:
timeout: 30.0
suite_types: [regression]
tests:
- id: engine_startup
name: Engine startup
tags: [smoke, engine]
priority: critical # critical / high / medium / low
depends_on: [] # skip if any listed test failed
suite_types: [smoke, regression]
setup:
- power.on: ecu_main
- sim.set: { signal: "EngineData.RPM", value: 0.0 }
- sim.start: EngineData
teardown:
- sim.stop: EngineData
- power.off: ecu_main
module: tests.engine # dotted module path from project root
function: test_engine_startup
params:
expected_rpm: 800.0 # forwarded as kwargs to the function
YAML setup/teardown actions:
- sim.set: { signal: "Msg.Sig", value: 0.0 }
- sim.start: MessageName
- sim.stop: MessageName
- sim.start_all: # start all configured messages
- power.on: rail_name
- power.off: rail_name
- gpio.set: { pin: ignition_enable, value: true }
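Each setup/teardown step is a one-key YAML mapping, which makes dispatch straightforward. A hypothetical dispatcher sketch (the `FakeRig` recorder and handler wiring are assumptions, not CruciHiL's executor):

```python
import asyncio

class FakeRig:
    """Records HAL calls instead of touching hardware."""
    def __init__(self):
        self.calls = []
    async def power_on(self, rail): self.calls.append(("power.on", rail))
    async def sim_set(self, signal, value): self.calls.append(("sim.set", signal, value))
    async def sim_start(self, msg): self.calls.append(("sim.start", msg))

async def run_setup(rig, steps):
    handlers = {
        "power.on": lambda arg: rig.power_on(arg),
        "sim.set": lambda arg: rig.sim_set(arg["signal"], arg["value"]),
        "sim.start": lambda arg: rig.sim_start(arg),
    }
    for step in steps:                  # each step is a one-key mapping from YAML
        (action, arg), = step.items()
        await handlers[action](arg)

# Equivalent of the setup: block in the example suite above
steps = [
    {"power.on": "ecu_main"},
    {"sim.set": {"signal": "EngineData.RPM", "value": 0.0}},
    {"sim.start": "EngineData"},
]
rig = FakeRig()
asyncio.run(run_setup(rig, steps))
print(rig.calls)
```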
Rig TOML format
Hardware details go in TOML, never in test code.
[rig]
name = "my_bench"
platform = "orin_nx"
spec_version = "1.0"
backend = "hardware" # or "virtual" for simulation
[rig.can.can0]
interface = "can0"
bitrate = 500000
fd = false
backend = "socketcan" # socketcan / peak / virtual / <module.path.ClassName>
[rig.ethernet.eth0]
interface = "eth0"
ip = "169.254.0.1"
someip_backend = "python-someip"
doip_backend = "python-doip"
[rig.power.ecu_main]
backend = "gpio_relay" # virtual_power / gpio_relay / bench_psu / <module.path.ClassName>
default = "off"
gpio_pin = 17 # for gpio_relay backend
[rig.gpio]
ignition_enable = { pin = 22, direction = "out", default = false, backend = "linux_gpio" }
[rig.ecus.ecu_main]
name = "Main ECU"
logical_address = 0x0001
transport = "doip" # doip or can_isotp
doip_interface = "eth0"
power_rail = "ecu_main" # optional — links power control to this ECU
boot_timeout = 10.0 # seconds — hardware-specific, never in test code
[rig.definitions]
can_dbc = "defs/vehicle_can.dbc"
[rig.cloud]
url = "https://crucihil-server.fly.dev"
# api_key is stored in ~/.crucihil/credentials.toml after first boot
Custom Hardware Backends
CruciHiL ships virtual and common reference backends. For custom hardware, implement the relevant ABC:
# Generate a stub for any backend type
crucihil scaffold --adapter power --name RelayBoard # → relay_board_backend.py
crucihil scaffold --adapter can --name MyUSBAdaptor
crucihil scaffold --adapter gpio --name FTDIBoard
# relay_board_backend.py
from crucihil.hal.backends.base import AbstractPowerBackend
class RelayBoardBackend(AbstractPowerBackend):
async def connect(self) -> None: ...
async def disconnect(self) -> None: ...
async def on(self) -> None: ...
async def off(self) -> None: ...
async def read_voltage(self) -> float: return 12.0
async def set_voltage(self, voltage: float) -> None: ...
Reference it in your TOML:
[rig.power.ecu_main]
backend = "mypackage.relay_board_backend.RelayBoardBackend"
The class is loaded via importlib at rig connect time. Any package on sys.path works.
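The dotted-path loading described above is standard importlib usage and can be sketched in a few lines (the helper name is hypothetical; the demo uses a stdlib class as a stand-in for a real backend):

```python
import importlib

def load_backend(dotted_path: str):
    """Split off the class name, import the module, fetch the attribute —
    works for any class on sys.path, e.g. 'mypackage.relay_board_backend.RelayBoardBackend'."""
    module_path, _, class_name = dotted_path.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, class_name)

# Any importable class works — here a stdlib stand-in:
cls = load_backend("collections.OrderedDict")
print(cls.__name__)  # OrderedDict
```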
Adapter types: can · power · gpio · doip · someip · udp · uds
Supported Hardware Backends
| Bus | Backends |
|---|---|
| CAN | socketcan (Linux), peak (PEAK PCAN-USB), virtual |
| SOME/IP | vsomeip, python-someip (virtual) |
| DoIP | python-doip, virtual |
| Power | gpio_relay, bench_psu, virtual_power |
| GPIO | linux_gpio, virtual_gpio |
| Custom | <dotted.module.path.ClassName> via importlib |
Complete Rig Setup Workflow
New project with virtual simulation
# 1. Install
pip install crucihil
# 2. Create a virtual rig
crucihil init # choose mode [1] Virtual
# 3. Generate test project
crucihil scaffold --rig rigs/my_rig.toml
# 4. Run
crucihil run --suite suites/smoke.yaml --rig rigs/my_rig.toml -v
Bringing a real bench machine online
# On the bench machine
pip install crucihil
# Run the setup wizard — mode [2] Hardware
crucihil init
# At the end, answer y to cloud registration
# Provide your app.crucihil.io email + password when prompted
# Start the agent
crucihil agent --rig rigs/<name>.toml
The rig appears as connected in the dashboard within seconds.
Production deployment (systemd)
sudo ./scripts/install-agent.sh \
--rig rigs/my_bench.toml \
--server https://crucihil-server.fly.dev \
--key <api-key>
systemctl status crucihil-agent@my-bench
journalctl -u crucihil-agent@my-bench -f
Cloud Dashboard
Available at https://app.crucihil.io.
First-time setup (self-hosted)
curl -X POST https://your-server/api/v1/setup \
-H 'Content-Type: application/json' \
-d '{"org_name":"Acme","admin_email":"you@company.com","admin_password":"strong-password"}'
Returns a JWT. Log in at the dashboard with the same email and password.
Inviting team members
Admins: Settings → Team → Invite member. Members can view results and trigger runs; admins can also manage rigs and users.
Connecting an AI client (MCP)
Add to Claude Desktop claude_desktop_config.json:
{
"mcpServers": {
"crucihil": {
"url": "https://crucihil-mcp.fly.dev/sse",
"headers": {
"Authorization": "Bearer <your-api-key>"
}
}
}
}
| MCP Tool | What it does |
|---|---|
| list_rigs | List rigs with online/offline status |
| get_rig_config | Hardware summary for one rig |
| list_runs | Query run history |
| get_run_summary | Pass/fail counts and status for one run |
| run_test_suite | Trigger a test suite on a connected rig |
| cancel_run | Cancel an active run |
| get_results | Per-test results (filterable by status) |
| get_signal_trace | Signal telemetry recorded during a run |
| describe_failure | Full failure context in one call — errors + signals + logs |
| list_signals | Parse a DBC and return all signal names |
| list_tests | Parse a YAML manifest and return test metadata |
| generate_test_suite | Scaffold YAML + Python stubs; pass context_items for AI-generated assertions |
Self-Hosting with Docker
cp .env.example .env # set POSTGRES_PASSWORD and SECRET_KEY
./setup.sh # bootstrap containers + first-run migration
./dev.sh # start dev server (hot reload) at localhost:5173
./setup.sh --status # service health
./setup.sh --restart # restart containers
./dev.sh --rig rigs/my_bench.toml # register + start a native agent
Deploy to Fly.io + Vercel
fly deploy --config fly.server.toml # control plane
fly deploy --config fly.mcp.toml # MCP server
# Dashboard auto-deploys to Vercel on push to main
Set secrets (never in toml files):
fly secrets set SECRET_KEY="..." REGISTRATION_TOKEN="..." RESEND_API_KEY="re_..." \
--app crucihil-server
Project Structure
crucihil/
├── hal/ Layer 2: Rig HAL (backends, BSE, namespaces, config)
├── agent/ Layer 3: Test runner, agent daemon, SQLite cache, init wizard
├── server/ Layer 4: FastAPI control plane + PostgreSQL
├── mcp/ Layer 5: MCP server (11 tools, FastMCP 3.x)
└── cli/ Layer 6: CLI entry point
rigs/ Rig TOML configs (hardware details — never in test code)
tests/
├── unit/ Unit tests
└── integration/ Integration tests against virtual rig
scripts/
├── release.sh Cut a release: ./scripts/release.sh 0.5.0
└── install-agent.sh Install agent as systemd service on a bench machine
Cutting a Release
./scripts/release.sh 0.5.0
Bumps pyproject.toml, commits, tags v0.5.0, pushes. GitHub Actions publishes to PyPI and creates a GitHub Release automatically.
Requirements
- Python 3.10+
- Linux required for real hardware (SocketCAN, GPIO)
- vsomeip library for SOME/IP hardware backend
- python-doip for DoIP hardware backend
- PEAK drivers for PEAK PCAN adapters