CruciHiL
Bulletproof, easy-to-use Hardware-in-the-Loop (HiL) testing for firmware teams.
Write a test in Python. Run it against simulation before hardware exists. Deploy to real hardware with zero test changes. See results in CI/CD automatically. Ask AI what broke and why.
Why CruciHiL
Legacy HiL tools (dSPACE, NI, Vector) are expensive, slow to configure, and hostile to modern dev workflows. CruciHiL is built for teams that move fast:
- Python-first — no proprietary scripting languages, full IDE support
- Simulation-to-hardware parity — same test file, swap a TOML config
- CI/CD native — runs headless, produces JUnit XML, integrates with GitHub Actions
- AI-powered analysis — MCP server connects Claude/GPT directly to test results and signal traces
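As a sketch of the CI/CD integration, a minimal GitHub Actions job might look like the following. The workflow structure and action versions are assumptions for illustration; only the crucihil run flags come from this README:

```yaml
# .github/workflows/hil.yml, an illustrative sketch (not an official template)
name: HiL tests
on: [push]
jobs:
  hil:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install crucihil
      # Run against the virtual rig in CI, so no bench hardware is needed
      - run: crucihil run --suite tests/suites/engine_validation.yaml --rig rigs/virtual.toml --output results.xml
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: junit-results
          path: results.xml
```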
Architecture
Layer 6 — Interfaces: Web Dashboard · CLI · CI/CD webhooks
Layer 5 — AI Interface: MCP Server (FastMCP) — 11 tools, vendor-agnostic
Layer 4 — Cloud Control: FastAPI + PostgreSQL — orchestration and history
Layer 3 — Local Agent: test runner · YAML executor · result reporter
Layer 2 — Rig HAL: rig.can / rig.sim / rig.someip / rig.doip / rig.ecu
Layer 1 — Hardware: CAN · Ethernet · GPIO · Power · ECUs
Test code only ever touches Layer 2. Hardware details live in TOML config, never in test code.
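As a hedged illustration of that split, a rig TOML might look roughly like this. Only the backend names, the [rig.cloud] keys, and the init-wizard prompts (bitrate, FD mode, DBC path) are documented in this README; the exact section and key names here are assumptions:

```toml
# rigs/my_bench.toml: illustrative sketch; the real file is written by `crucihil init`
[rig.can]
backend = "socketcan"       # swap to "virtual" for simulation, "peak" for PCAN-USB
bitrate = 500000            # prompted for by the init wizard (assumed key name)
dbc = "dbc/powertrain.dbc"  # assumed key name

[rig.cloud]
url = "https://crucihil-server.fly.dev"
```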
Installation
pip install crucihil
Or from source:
git clone <repo>
cd crucihil
python -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
CLI Reference
crucihil --help
Commands:
version Show CruciHiL version
run Run a test suite against a rig
agent Start the persistent local agent daemon
init Interactive wizard — create a rig TOML and register with cloud
discover AI-assisted rig setup (probes hardware, generates TOML)
crucihil run — run tests locally
# Run against the virtual simulation rig (no hardware needed)
crucihil run --suite tests/suites/engine_validation.yaml --rig rigs/virtual.toml
# Run against real hardware
crucihil run --suite tests/suites/engine_validation.yaml --rig rigs/my_bench.toml
# With JUnit XML + HTML output
crucihil run --suite tests/suites/engine_validation.yaml --rig rigs/virtual.toml \
--output results.xml --html results.html
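The --suite argument points at a YAML manifest (the same file the list_tests MCP tool parses). The schema is not documented in this README, so the shape below is purely hypothetical, included only to give a sense of what a manifest might contain:

```yaml
# tests/suites/engine_validation.yaml: hypothetical shape, not the documented schema
name: engine_validation
tests:
  - tests/test_engine_startup.py
  - tests/test_can_dropout_recovery.py
```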
crucihil init — set up a new rig
The interactive wizard for configuring a new rig. Run this on the bench machine:
crucihil init
It will:
- Auto-detect CAN and Ethernet interfaces (ip link)
- Prompt for bitrate, FD mode, power backend, DBC path
- Write a validated rigs/<name>.toml
- Optionally register the rig with the cloud control plane (self-registers, saves API key)
After crucihil init, start the agent with no further config:
crucihil agent --rig rigs/<name>.toml
crucihil agent — persistent agent daemon
Runs on the bench machine. Connects to the cloud via WebSocket, receives test run commands, streams results back.
crucihil agent --rig rigs/my_bench.toml
First boot auto-registration: if [rig.cloud] has a registration_token but no api_key, the agent registers itself, saves the key to ~/.crucihil/credentials.toml, and connects — no manual steps needed.
# rigs/my_bench.toml
[rig.cloud]
url = "https://crucihil-server.fly.dev"
registration_token = "your-REGISTRATION_TOKEN-here"
# api_key is saved automatically after first boot
Options:
--rig, -r PATH Path to rig TOML config (required)
--cache PATH SQLite result cache path (default: ~/.crucihil/results.db)
--verbose, -v Enable debug logging
Writing Tests
async def test_engine_startup(rig: Rig):
    await rig.can.send(message="EngineControl", fields={"Throttle": 50.0, "Mode": 1.0})
    result = await rig.can.expect(
        signal="EngineData.RPM",
        condition=lambda v: v > 800,
        timeout=2.0,
    )
    assert result.passed, result.fail_msg
Switch from virtual to real hardware: change --rig rigs/virtual.toml to --rig rigs/my_bench.toml. The test is unchanged.
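The expect call above polls a signal until a condition holds or a deadline passes. As a rough mental model (a sketch, not CruciHiL's actual implementation; the read_signal parameter and ExpectResult fields beyond passed/fail_msg are assumptions), an expect-style helper can be written as:

```python
import asyncio
import time
from dataclasses import dataclass
from typing import Awaitable, Callable


@dataclass
class ExpectResult:
    passed: bool
    fail_msg: str = ""


async def expect(
    read_signal: Callable[[], Awaitable[float]],
    condition: Callable[[float], bool],
    timeout: float,
    poll: float = 0.01,
) -> ExpectResult:
    """Poll a signal until `condition` holds or `timeout` seconds elapse."""
    value = None
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        value = await read_signal()
        if condition(value):
            return ExpectResult(passed=True)
        await asyncio.sleep(poll)  # yield between polls
    return ExpectResult(
        passed=False,
        fail_msg=f"condition not met within {timeout}s (last value: {value})",
    )
```

The key property this models is that a passing expect returns as soon as the condition is first satisfied, while a failing one consumes the full timeout.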
Fault injection
async def test_can_dropout_recovery(rig: Rig):
    async with rig.fault.inject(rig.fault.can_dropout(arb_id=0x100, duration=1.0)):
        await asyncio.sleep(1.0)
    result = await rig.can.expect("EngineData.RPM", lambda v: v > 0, timeout=3.0)
    assert result.passed, result.fail_msg
ECU firmware flash
async def test_firmware_update(rig: Rig):
    result = await rig.ecu.ecu_main.flash("builds/firmware_v2.1.bin")
    assert result.success, result.error
Complete Rig Setup Workflow
This is the end-to-end process for bringing a new bench machine online.
Step 1 — Install CruciHiL on the bench machine
pip install crucihil
Step 2 — Run the setup wizard
crucihil init
Follow the prompts. At the end it will ask:
Connect this rig to a CruciHiL cloud server? [y/N]:
Answer y and provide:
- Server URL — https://crucihil-server.fly.dev (or your self-hosted URL)
- Registration token — the REGISTRATION_TOKEN from your server setup
The wizard registers the rig, saves the API key locally, and writes the [rig.cloud] section into the TOML. You won't need to handle the key manually.
Step 3 — Start the agent
crucihil agent --rig rigs/<name>.toml
The rig appears as connected in the dashboard within seconds. You can now trigger test runs from the dashboard, CLI, or via AI through the MCP server.
Step 4 (optional) — Install as a systemd service for production
sudo ./scripts/install-agent.sh \
--rig rigs/my_bench.toml \
--server https://crucihil-server.fly.dev \
--key <api-key>
systemctl status crucihil-agent@my-bench
journalctl -u crucihil-agent@my-bench -f
systemctl restart crucihil-agent@my-bench
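The commands above imply a systemd template unit named crucihil-agent@. The unit file itself is generated by install-agent.sh and is not shown in this README, so the sketch below is a hypothetical shape with assumed paths, included only to illustrate what such a template unit looks like:

```ini
# /etc/systemd/system/crucihil-agent@.service
# Hypothetical sketch of a template unit like the one install-agent.sh creates;
# all paths and the User value are assumptions.
[Unit]
Description=CruciHiL agent (%i)
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/crucihil agent --rig /opt/crucihil/rigs/%i.toml
Restart=on-failure
User=crucihil

[Install]
WantedBy=multi-user.target
```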
Cloud Dashboard
The web dashboard is available at https://app.crucihil.io.
First-time setup (self-hosted)
Bootstrap your first org and admin account via the setup API (one-time only):
curl -X POST https://your-server/api/v1/setup \
-H 'Content-Type: application/json' \
-d '{"org_name":"Acme Corp","admin_email":"you@company.com","admin_password":"strong-password"}'
Returns a JWT. Log in at the dashboard with the same email and password.
Inviting team members
Admins can invite members from Settings → Team → Invite member. An email is sent via Resend with a link to set their password. Members can view results and trigger runs; admins can also manage rigs and users.
Connecting an AI client (MCP)
Add to Claude Desktop claude_desktop_config.json:
{
"mcpServers": {
"crucihil": {
"url": "https://crucihil-mcp.fly.dev/sse",
"headers": {
"Authorization": "Bearer <your-CRUCIHIL_API_KEY>"
}
}
}
}
| MCP Tool | What it does |
|---|---|
| list_rigs | List rigs with online/offline status |
| get_rig_config | Hardware summary for one rig |
| list_runs | Query run history |
| get_run_summary | Pass/fail counts and status for one run |
| run_test_suite | Trigger a test suite on a connected rig |
| cancel_run | Cancel an active run |
| get_results | Per-test results (filterable by status) |
| get_signal_trace | Signal telemetry recorded during a run |
| describe_failure | Full failure context in one call — errors + signals + logs |
| list_signals | Parse a DBC and return all signal names |
| list_tests | Parse a YAML manifest and return test metadata |
Self-Hosting with Docker
# Clone and configure
cp .env.example .env
# Edit .env — set POSTGRES_PASSWORD and SECRET_KEY
# Bootstrap everything in one command
./setup.sh
# Start the dashboard dev server (hot reload)
./dev.sh
Dashboard at http://localhost:5173.
./setup.sh --status # service health
./setup.sh --restart # restart containers
./dev.sh --rig rigs/my_bench.toml # register and start a native rig agent
Deploy to Fly.io
fly deploy --config fly.server.toml # control plane
fly deploy --config fly.mcp.toml # MCP server
Set secrets (not in toml files):
fly secrets set SECRET_KEY="..." REGISTRATION_TOKEN="..." RESEND_API_KEY="re_..." --app crucihil-server
Dashboard deploys automatically to Vercel on push to main.
Supported Hardware Backends
| Bus | Backends |
|---|---|
| CAN | socketcan (Linux), peak (PEAK PCAN-USB), virtual |
| SOME/IP | vsomeip, virtual_someip |
| DoIP | python-doip, virtual |
| Power | gpio_relay, bench_psu, virtual_power |
| GPIO | linux_gpio, virtual_gpio |
| Custom | backend = "myorg.module:ClassName" via importlib |
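The custom-backend row above resolves a "module:ClassName" string via importlib. The general pattern for resolving such a spec looks like the sketch below; load_backend is an illustrative helper, not CruciHiL's actual API:

```python
import importlib


def load_backend(spec: str) -> type:
    """Resolve a "package.module:ClassName" spec to the class it names."""
    module_path, _, class_name = spec.partition(":")
    module = importlib.import_module(module_path)
    return getattr(module, class_name)


# Works for any importable class, e.g. one from the standard library:
cls = load_backend("collections:OrderedDict")
```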
Project Structure
crucihil/
├── hal/ Layer 2: Rig HAL (backends, BSE, namespaces, config)
├── agent/ Layer 3: Test runner, agent daemon, SQLite cache, init wizard
├── server/ Layer 4: FastAPI control plane + PostgreSQL
├── mcp/ Layer 5: MCP server (11 tools, FastMCP 3.x)
└── cli/ Layer 6: CLI entry point
rigs/ Rig TOML configs (hardware details — never in test code)
tests/
├── unit/ Unit tests (361 passing)
└── integration/ Integration tests against virtual rig
scripts/
├── release.sh Cut a new release: ./scripts/release.sh 0.2.0
└── install-agent.sh Install agent as systemd service on a bench machine
Cutting a Release
./scripts/release.sh 0.2.0
Bumps pyproject.toml, commits, tags v0.2.0, pushes. GitHub Actions publishes to PyPI and creates a GitHub Release automatically.
Requirements
- Python 3.11+
- Real hardware additionally requires: Linux (for SocketCAN/GPIO), vsomeip, python-doip, and relevant drivers