# redeploy

Infrastructure migration toolkit: detect → plan → apply.

## AI Cost Tracking

- 🤖 LLM usage: $6.0000 (40 commits)
- 👤 Human dev: ~$1343 (13.4h @ $100/h, 30min dedup)

Generated on 2026-04-21 using openrouter/qwen/qwen3-coder-next.
Infrastructure migration and device deploy toolkit — VPS, Raspberry Pi kiosk, Podman Quadlet, k3s.

- `redeploy detect` → live-probe a host (what is there now)
- `redeploy plan` → migration-plan.yaml (what to do)
- `redeploy apply` → execute the plan (do it)
- `redeploy run` → detect + plan + apply (all at once, from a spec)
- `redeploy scan` → find devices on the LAN (device registry)
- `redeploy target` → deploy to a named device (fleet)
## Install

```bash
# Recommended — installs the CLI globally (no venv conflicts)
pipx install redeploy

# Or inside a venv
pip install redeploy

# With doql integration (generates migration.yaml from app.doql):
pip install doql[deploy]
```
## Quick start — VPS production deploy

```bash
# 1. Create spec file
cat > migration.yaml << 'EOF'
name: "myapp deploy 1.0.19 → 1.0.20"
source:
  strategy: docker_full
  host: root@YOUR_VPS_IP
  app: myapp
  version: "1.0.19"
target:
  strategy: docker_full
  host: root@YOUR_VPS_IP
  app: myapp
  version: "1.0.20"
  domain: myapp.example.com
  env_file: envs/prod.env
  compose_files:
    - docker-compose.prod.yml
  verify_url: https://myapp.example.com/api/v1/health
  verify_version: "1.0.20"
EOF

# 2. Preview steps (no SSH needed)
redeploy run migration.yaml --plan-only

# 3. Dry run (connects via SSH, makes no changes)
redeploy run migration.yaml --dry-run

# 4. Full deploy (live detect → plan → apply)
redeploy run migration.yaml --detect

# Or without --detect (faster, uses the spec's source as-is)
redeploy run migration.yaml
```
## Quick start — Raspberry Pi kiosk

```bash
# Register the RPi in the device registry
redeploy device-add pi@192.168.1.42 \
  --tag kiosk --tag rpi4 \
  --strategy native_kiosk \
  --app kiosk-app \
  --name "Workshop kiosk #1"

# Preview deploy plan
redeploy target pi@192.168.1.42 migration.yaml --plan-only

# Dry run
redeploy target pi@192.168.1.42 migration.yaml --dry-run

# Deploy
redeploy target pi@192.168.1.42 migration.yaml --detect
```
## Device registry — find and manage devices

```bash
# Discover SSH-accessible devices on the local network (passive: known_hosts + ARP + mDNS)
redeploy scan

# Active ICMP ping sweep (sends packets)
redeploy scan --ping --subnet 192.168.1.0/24

# Try specific SSH users
redeploy scan --user pi --user ubuntu --timeout 8

# List all known devices
redeploy devices

# Filter by tag or strategy
redeploy devices --tag kiosk
redeploy devices --strategy native_kiosk
redeploy devices --reachable   # seen in the last 5 minutes

# JSON output for scripting
redeploy devices --json | jq '.[] | select(.tags | index("prod"))'

# Add a device manually
redeploy device-add root@10.0.0.5 --tag prod --strategy docker_full --app myapp

# Remove a device
redeploy device-rm root@10.0.0.5
```

The registry is stored at `~/.config/redeploy/devices.yaml` (chmod 600 — safe for SSH key paths).
## CLI reference

### `redeploy run SPEC [options]`

Execute a deploy from a YAML spec file (or from the `redeploy.yaml` project manifest if no argument is given).

| Option | Description |
|---|---|
| `--plan-only` | Show steps without connecting via SSH |
| `--dry-run` | Connect, show steps, make no changes |
| `--detect` | Live-probe the host before planning (recommended for prod) |
| `--env NAME` | Use a named environment from `redeploy.yaml` (e.g. `prod`, `rpi5`) |
| `--plan-out FILE` | Save the generated plan to a file |

```bash
redeploy run --env prod            # use the prod env from redeploy.yaml
redeploy run --env rpi5 --detect   # deploy to rpi5 with a live probe
redeploy run --dry-run             # uses .env DEPLOY_* vars if no redeploy.yaml
```
### `redeploy scan [options]`

Discover SSH-accessible devices on the local network.

| Source | Network activity | Requires |
|---|---|---|
| `known_hosts` | none | `~/.ssh/known_hosts` |
| `arp` | none | `ip neigh` / `arp -a` |
| `mdns` | passive listen | `avahi-browse` |
| `ping_sweep` | ICMP — active | `--ping` flag |

All SSH-reachable devices are saved to the registry. Existing entries are updated (`last_seen`, `mac`, `hostname`); old entries are never deleted.
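For intuition, the passive `known_hosts` source amounts to plain file parsing — no packets are sent. The sketch below is illustrative, not redeploy's actual implementation; hashed entries (lines starting with `|1|`) cannot be reversed, so they are skipped:

```python
from pathlib import Path


def known_hosts_candidates(path: Path) -> list[str]:
    """Extract candidate hosts from an OpenSSH known_hosts file
    (illustrative sketch; redeploy's real parser may differ)."""
    hosts: list[str] = []
    for line in path.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or line.startswith("|1|"):
            continue  # skip blanks, comments, and hashed entries
        # First field is a comma-separated host list, possibly [host]:port
        for host in line.split()[0].split(","):
            host = host.split("]:")[0].lstrip("[")
            if host not in hosts:
                hosts.append(host)
    return hosts
```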
### `redeploy target DEVICE_ID [SPEC] [options]`

Deploy a spec to a registered device. The device's host, strategy, app, and domain are overlaid onto the spec.

```bash
redeploy target pi@192.168.1.42                         # uses migration.yaml in cwd
redeploy target pi@192.168.1.42 custom.yaml --dry-run
redeploy target prod-vps --detect --plan-only
```

After a successful deploy, a DeployRecord is saved to the device in the registry (timestamp, strategy, version, ok/fail).
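The record's shape can be pictured as a small dataclass — a hypothetical sketch mirroring the fields listed above, not redeploy's actual model:

```python
from dataclasses import dataclass


@dataclass
class DeployRecord:
    """Hypothetical per-device deploy record (field names beyond
    those described in the text are assumptions)."""
    timestamp: str   # when the deploy finished
    strategy: str    # e.g. "docker_full"
    version: str     # deployed version string
    ok: bool         # success/failure flag
```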
### `redeploy detect` / `plan` / `apply` / `migrate` / `init` / `status`

```bash
redeploy detect --host root@VPS_IP --app myapp -o infra.yaml
redeploy plan --infra infra.yaml --target target.yaml -o plan.yaml
redeploy apply --plan plan.yaml
redeploy migrate --host root@VPS_IP --app myapp --target target.yaml   # all in one
redeploy init     # scaffold migration.yaml + redeploy.yaml
redeploy status   # show project manifest summary
```
## Deployment strategies

| Strategy | Description | Use case |
|---|---|---|
| `docker_full` | Docker Compose — build + up | VPS production |
| `podman_quadlet` | Rootless Podman systemd units | Quadlet/rootless VPS |
| `native_kiosk` | systemd + Chromium/Openbox | RPi kiosk (no Docker) |
| `docker_kiosk` | Podman Quadlet in kiosk mode | RPi kiosk with container |
| `k3s` | Kubernetes/k3s | k3s cluster |
| `systemd` | Native systemd service | Bare metal |
### `native_kiosk` plan steps

Generated automatically when `strategy: native_kiosk`:

1. `rsync_build` → sync `build/` to the device
2. `run_kiosk_installer` → `bash build/infra/install-kiosk.sh`
3. `install_kiosk_service` → scp `kiosk.service` → `/etc/systemd/system/`
4. `enable_kiosk_service` → `systemctl enable --now`
5. `wait_kiosk_start` → 20s
6. `http_health_check` → `curl http://localhost:8080`

### `docker_kiosk` plan steps

1. `rsync_build` → sync `build/` to the device
2. `install_kiosk_quadlet` → cp `*.container` → `~/.config/containers/systemd/` + daemon-reload
3. `start_kiosk_container` → `systemctl --user restart app.service`
4. `wait_kiosk_start` → 20s
5. `http_health_check` → `curl http://localhost:8080`

### `podman_quadlet` plan steps

1. `sync_env` → scp `.env` to the remote
2. `install_quadlet_files` → cp `*.container *.network *.volume` → `~/.config/containers/systemd/`
3. `podman_daemon_reload` → `systemctl --user daemon-reload`
4. `stop_<app>` → `systemctl --user stop <app>.service`
5. `start_<app>` → `systemctl --user start <app>.service`
6. `wait_startup` → 15s
7. `http_health_check` → `verify_url` health endpoint
8. `version_check` → `verify_version` match

For system (root) mode, set `stop_services: true` in the target — this switches to `systemctl` (no `--user`) and `/etc/containers/systemd/`.

### `docker_full` plan steps

1. `sync_env` → scp `env_file` → `remote_dir/.env`
2. `docker_build_pull` → `docker compose build` (on the remote)
3. `docker_compose_up` → `docker compose up -d --build`
4. `wait_startup` → 30s
5. `http_health_check` → `verify_url` health endpoint
6. `version_check` → `verify_version` match
## migration.yaml spec format

```yaml
name: "myapp deploy 1.0.19 → 1.0.20"
description: "Production VPS version bump"

source:
  strategy: docker_full       # docker_full | podman_quadlet | native_kiosk | docker_kiosk | k3s | systemd
  host: root@87.106.87.183    # SSH target (user@ip) or "local"
  app: myapp
  version: "1.0.19"
  domain: myapp.example.com
  remote_dir: ~/myapp

target:
  strategy: docker_full
  host: root@87.106.87.183
  app: myapp
  version: "1.0.20"
  domain: myapp.example.com
  remote_dir: ~/myapp
  compose_files:
    - docker-compose.vps.yml
  env_file: envs/vps.env
  verify_url: https://myapp.example.com/api/v1/health
  verify_version: "1.0.20"

extra_steps:                           # optional — appended or inserted
  - id: flush_k3s_iptables             # StepLibrary name — no action needed
    insert_before: docker_build_pull   # inject before a specific step
  - id: docker_prune                   # StepLibrary: prune unused images
  - id: notify_slack                   # custom step (needs action:)
    action: ssh_cmd
    description: "Send deploy notification"
    command: "curl -s -X POST $SLACK_WEBHOOK -d '{\"text\":\"deployed 1.0.20\"}'"
    risk: low
```
## StepLibrary — reusable named steps

Reference any step by `id` alone — no `action` needed. Fields can be overridden:

```yaml
extra_steps:
  - id: flush_k3s_iptables                  # use as-is
  - id: stop_k3s
  - id: http_health_check
    url: https://myapp.example.com/health   # override url
  - id: wait_startup_long                   # 60s instead of 30s
```
| ID | Action | Description |
|---|---|---|
| `flush_k3s_iptables` | `ssh_cmd` | Flush CNI-HOSTPORT-DNAT + KUBE-* chains (stale k3s rules block docker-proxy on 80/443) |
| `delete_k3s_ingresses` | `kubectl_delete` | Delete all k3s ingresses |
| `stop_k3s` | `systemctl_stop` | Stop the k3s service |
| `disable_k3s` | `systemctl_disable` | Disable k3s on boot |
| `stop_nginx` | `systemctl_stop` | Stop host nginx (port 80 conflict) |
| `restart_traefik` | `ssh_cmd` | Restart the Traefik container |
| `docker_prune` | `ssh_cmd` | Prune unused images + build cache |
| `docker_compose_down` | `docker_compose_down` | Stop the Docker Compose stack |
| `wait_startup` | `wait` | Wait 30s |
| `wait_startup_long` | `wait` | Wait 60s |
| `http_health_check` | `http_check` | Verify the health endpoint (expect: healthy) |
| `version_check` | `version_check` | Verify the deployed version |
| `sync_env` | `scp` | Copy `.env` to the remote |
| `podman_daemon_reload` | `systemctl_start` | `systemctl --user daemon-reload` |
| `stop_podman` | `systemctl_stop` | Stop all Podman containers via systemd |
| `enable_podman_unit` | `systemctl_start` | `systemctl daemon-reload && enable --now {service}.service` |
| `systemctl_restart` | `systemctl_start` | Restart a systemd service (`command=` to override) |
| `systemctl_daemon_reload` | `ssh_cmd` | `systemctl daemon-reload` |
| `git_pull` | `ssh_cmd` | `git pull --ff-only` with rollback (`git reset --hard HEAD@{1}`) |
### insert_before

By default, extra steps are appended after all generated steps. Use `insert_before: <step_id>` to inject a step at a specific position:

```yaml
extra_steps:
  - id: flush_k3s_iptables
    insert_before: docker_build_pull   # runs before build, not after verify
```
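The placement rule can be sketched as a simple list splice: find the anchor step by id and insert in front of it, otherwise append. This is a minimal illustrative sketch, not redeploy's internals; step dicts carry only the fields used here:

```python
def place_extra_steps(plan: list[dict], extras: list[dict]) -> list[dict]:
    """Splice extra steps into a generated plan (hypothetical helper).
    Steps with insert_before go in front of the named step; all others
    are appended after the generated steps."""
    result = list(plan)
    for extra in extras:
        anchor = extra.get("insert_before")
        ids = [step["id"] for step in result]
        if anchor in ids:
            result.insert(ids.index(anchor), extra)  # before the anchor
        else:
            result.append(extra)                     # default: append
    return result
```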
## Plugin system

Extend the step pipeline with custom action types using `action: plugin`:

```yaml
extra_steps:
  - id: reload_kiosk
    action: plugin
    plugin_type: browser_reload
    description: Reload kiosk browser after deploy
    plugin_params:
      port: 9222
      ignore_cache: true
      url_contains: "localhost:8100"
```

### Built-in plugins

| `plugin_type` | Description | `plugin_params` |
|---|---|---|
| `browser_reload` | Reload Chromium via CDP (Chrome DevTools Protocol) over SSH | `port` (9222), `ignore_cache` (true), `url_contains` ("") |
### Writing a custom plugin

Place a `.py` file in `./redeploy_plugins/` (project-local) or `~/.redeploy/plugins/` (user-global):

```python
# ./redeploy_plugins/notify.py
from redeploy.plugins import register_plugin, PluginContext
from redeploy.models import StepStatus

@register_plugin("notify_slack")
def notify_slack(ctx: PluginContext) -> None:
    webhook = ctx.params["webhook"]
    ctx.probe.run(f"curl -X POST {webhook} -d '{{\"text\":\"deployed!\"}}'")
    ctx.step.result = "notified"
    ctx.step.status = StepStatus.DONE
```
PluginContext fields:

| Field | Type | Description |
|---|---|---|
| `step` | `MigrationStep` | Current step — set `result` and `status` here |
| `host` | `str` | SSH host (e.g. `pi@192.168.1.5`) |
| `probe` | `RemoteProbe` | Call `probe.run(cmd)` for remote SSH commands |
| `emitter` | `ProgressEmitter?` | Emit mid-step progress: `emitter.progress(step.id, msg)` |
| `params` | `dict` | Shortcut for `step.plugin_params` |
| `dry_run` | `bool` | Skip side-effects if True |
## Inline Scripts

Execute multiline bash scripts directly from YAML without external files:

```yaml
extra_steps:
  - id: configure_kiosk
    action: inline_script
    description: "Deploy kiosk launch script"
    command: |
      #!/bin/bash
      mkdir -p ~/c2004/config
      cat > ~/c2004/config/kiosk-launch.sh << 'EOF'
      #!/bin/bash
      if command -v chromium-browser >/dev/null 2>&1; then
        chromium-browser --kiosk http://localhost:8100
      elif command -v firefox >/dev/null 2>&1; then
        firefox --kiosk http://localhost:8100
      fi
      EOF
      chmod +x ~/c2004/config/kiosk-launch.sh
    risk: medium
    timeout: 60
```

The script is base64-encoded and executed via SSH with automatic temp-file cleanup. Use the `command` field for multiline script content (YAML `|` preserves newlines).
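The base64 transport sidesteps shell-quoting issues when shipping a multiline script over SSH. A minimal sketch of the idea — the exact command redeploy builds, the remote temp path, and the ssh invocation here are all assumptions:

```python
import base64
import shlex


def ssh_inline_script_cmd(host: str, script: str) -> str:
    """Build an ssh command that ships a script as base64, runs it,
    and removes the temp file afterwards (illustrative only)."""
    b64 = base64.b64encode(script.encode()).decode()
    remote = (
        f"echo {b64} | base64 -d > /tmp/redeploy-step.sh"
        " && bash /tmp/redeploy-step.sh"
        "; rm -f /tmp/redeploy-step.sh"  # cleanup even on failure
    )
    return f"ssh {host} {shlex.quote(remote)}"
```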
## Script References (`command_ref`)

Instead of duplicating scripts in YAML, reference a script defined in a markdown codeblock:

```yaml
extra_steps:
  - id: configure_kiosk
    action: inline_script
    description: "Execute kiosk script from markdown"
    command_ref: "#kiosk-browser-configuration-script"
    risk: medium
```

In your migration markdown file, define the script in a section:

    ## Kiosk Browser Configuration Script

    ```bash
    #!/bin/bash
    # Auto-detect browser...
    if command -v chromium-browser >/dev/null 2>&1; then
      chromium-browser --kiosk http://localhost:8100
    fi
    ```

**Benefits:**

- Single source of truth — the script lives in one place (a markdown codeblock)
- No duplication between markdown documentation and YAML
- Easy to read and maintain
- Changes to the codeblock automatically apply to the deployment

**Reference formats:**

- `"#section-id"` — script from a section in the current spec file
- `"./file.md#section-id"` — script from a section in a specific file

The section ID is derived from the heading: lowercase, with spaces becoming hyphens.
Example: `## Kiosk Browser Configuration Script` → `#kiosk-browser-configuration-script`
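The derivation rule can be sketched as a small slug function — a guess at the common GitHub-style scheme (strip punctuation, lowercase, hyphens for spaces); redeploy's exact normalization may differ:

```python
import re


def section_id(heading: str) -> str:
    """Derive a section ID from a markdown heading: lowercase,
    punctuation dropped, spaces replaced by hyphens (sketch of the
    rule described above, not redeploy's actual implementation)."""
    text = heading.lstrip("#").strip().lower()
    text = re.sub(r"[^a-z0-9\s-]", "", text)  # drop punctuation
    return "-".join(text.split())             # spaces → hyphens
```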
### Execute Script by Reference (`redeploy exec`)
Run a single script from markdown without running the full migration:

```bash
# Execute a script from a codeblock on a remote host
redeploy exec '#kiosk-browser-configuration-script' \
  --host pi@192.168.188.108 \
  --file migration.podman-rpi5-resume.md

# With the file in the reference
redeploy exec './migration.md#install-deps' --host root@server.com

# Using markpact:ref (more explicit)
redeploy exec 'kiosk-script-id' --host pi@192.168.188.108 --file migration.md

# Dry-run to preview the script
redeploy exec '#backup-script' --host pi@192.168.188.108 --file ops.md --dry-run
```

This is useful for:

- One-off operations defined in markdown docs
- Testing individual scripts before a full migration
- Running maintenance tasks
### Execute Multiple Scripts (`redeploy exec-multi`)

Test multiple scripts at once:

```bash
# Execute multiple scripts by ref
redeploy exec-multi 'kiosk-script,install-deps,cleanup' \
  --host pi@192.168.188.108 \
  --file migration.md

# Mix of markpact:ref and section headings
redeploy exec-multi 'script1,#section2,script3' \
  --host root@server.com \
  --file deploy.md \
  --dry-run
```
### Marking Codeblocks with `markpact:ref`

For more explicit script identification, use `markpact:ref <id>` on the codeblock fence:

    ```bash markpact:ref kiosk-browser-configuration-script
    #!/bin/bash
    # Auto-detect browser...
    if command -v chromium-browser >/dev/null 2>&1; then
      chromium-browser --kiosk http://localhost:8100
    fi
    ```

Benefits of `markpact:ref`:

- Explicit ID assignment (not derived from the heading)
- Multiple scripts per section
- Can be referenced by a simple ID instead of the full heading
- Self-documenting in markdown
## redeploy.yaml project manifest

Place it in the project root — `redeploy run` (no args) uses it automatically.
It supports **named environments** for multi-target projects:

```yaml
spec: migration.yaml   # default spec file
app: myapp

environments:
  prod:
    host: root@87.106.87.183
    strategy: docker_full
    domain: myapp.example.com
    env_file: envs/vps.env
    verify_url: https://myapp.example.com/api/v1/health
  rpi5:
    host: pi@192.168.188.108
    strategy: systemd
    env_file: .env
    verify_url: http://192.168.188.108:8000/api/v1/health
  dev:
    host: local
    strategy: docker_full
    env_file: .env.local
    verify_url: http://localhost:8000/api/v1/health
```

Fallback: if no `redeploy.yaml` is found, `redeploy run` reads `DEPLOY_*` vars from `.env`:

```bash
# .env
DEPLOY_HOST=pi@192.168.1.5
DEPLOY_APP=myapp
DEPLOY_DOMAIN=myapp.local
DEPLOY_ENV_FILE=.env
```
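The fallback amounts to collecting `DEPLOY_*` keys from a dotenv file. A minimal sketch — hypothetical helper; redeploy's real loader may handle quoting, `export` prefixes, and interpolation differently:

```python
def read_deploy_vars(text: str) -> dict[str, str]:
    """Collect DEPLOY_* variables from .env-style text
    (illustrative sketch of the fallback described above)."""
    found: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and non-assignments
        key, _, value = line.partition("=")
        if key.startswith("DEPLOY_"):
            found[key] = value.strip().strip("'\"")
    return found
```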
## doql integration

redeploy is the deploy engine for doql declarative apps.

```bash
# Install with doql integration
pip install doql[deploy]

# doql build generates build/infra/migration.yaml automatically
DEPLOY_HOST=root@YOUR_VPS doql build

# Then deploy — no args needed
doql deploy              # calls the redeploy API internally
doql deploy --plan-only
doql deploy --dry-run
doql quadlet --install   # installs Quadlet units via redeploy
```

doql `DEPLOY.target` → redeploy strategy mapping:

| doql | redeploy |
|---|---|
| `docker-compose` | `docker_full` |
| `quadlet` | `podman_quadlet` |
| `kiosk-appliance` | `native_kiosk` |
| `kubernetes` | `k3s` |
## Examples

| Directory | Scenario | Strategy |
|---|---|---|
| `01-vps-version-bump` | VPS Docker version bump | docker_full → docker_full |
| `02-k3s-to-docker` | Migrate off k3s | k3s → docker_full |
| `03-docker-to-podman-quadlet` | Move to rootless Podman | docker_full → podman_quadlet |
| `04-rpi-kiosk` | Raspberry Pi kiosk update | native_kiosk → native_kiosk |
| `05-iot-fleet-ota` | IoT fleet OTA update | docker_full → docker_full |
| `09-fleet-yaml` | Fleet with stages + scan | fleet + redeploy target |
| `11-traefik-tls` | Traefik + Let's Encrypt | docker_full → podman_quadlet |
| `12-ci-pipeline` | GitHub Actions / GitLab CI | CI-triggered docker_full |
```bash
# Run any example in plan-only mode (no SSH required):
redeploy run examples/01-vps-version-bump/migration.yaml --plan-only
redeploy run examples/04-rpi-kiosk/migration.yaml --plan-only
```
## License

Licensed under Apache-2.0.