
Infrastructure migration toolkit: detect → plan → apply


redeploy

AI Cost Tracking


  • 🤖 LLM usage: $1.9500 (13 commits)
  • 👤 Human dev: ~$333 (3.3h @ $100/h, 30min dedup)

Generated on 2026-04-20 using openrouter/qwen/qwen3-coder-next



Infrastructure migration and device deploy toolkit — VPS, Raspberry Pi kiosk, Podman Quadlet, k3s.

redeploy detect   →  live probe host        (what is there now)
redeploy plan     →  migration-plan.yaml    (what to do)
redeploy apply    →  execute plan           (do it)
redeploy run      →  detect + plan + apply  (all at once from spec)
redeploy scan     →  find devices on LAN    (device registry)
redeploy target   →  deploy to named device (fleet)

Install

# Recommended — installs CLI globally (no venv conflicts)
pipx install redeploy

# Or inside a venv
pip install redeploy

# With doql integration (generates migration.yaml from app.doql):
pip install doql[deploy]

Quick start — VPS production deploy

# 1. Create spec file
cat > migration.yaml << 'EOF'
name: "myapp deploy 1.0.19 → 1.0.20"
source:
  strategy: docker_full
  host: root@YOUR_VPS_IP
  app: myapp
  version: "1.0.19"
target:
  strategy: docker_full
  host: root@YOUR_VPS_IP
  app: myapp
  version: "1.0.20"
  domain: myapp.example.com
  env_file: envs/prod.env
  compose_files:
    - docker-compose.prod.yml
  verify_url: https://myapp.example.com/api/v1/health
  verify_version: "1.0.20"
EOF

# 2. Preview steps (no SSH needed)
redeploy run migration.yaml --plan-only

# 3. Dry run (connects via SSH, makes no changes)
redeploy run migration.yaml --dry-run

# 4. Full deploy (live detect → plan → apply)
redeploy run migration.yaml --detect

# Or without --detect (faster, uses spec source as-is)
redeploy run migration.yaml

Quick start — Raspberry Pi kiosk

# Register the RPi in the device registry
redeploy device-add pi@192.168.1.42 \
  --tag kiosk --tag rpi4 \
  --strategy native_kiosk \
  --app kiosk-app \
  --name "Workshop kiosk #1"

# Preview deploy plan
redeploy target pi@192.168.1.42 migration.yaml --plan-only

# Dry run
redeploy target pi@192.168.1.42 migration.yaml --dry-run

# Deploy
redeploy target pi@192.168.1.42 migration.yaml --detect

Device registry — find and manage devices

# Discover SSH-accessible devices on local network (passive: known_hosts + ARP + mDNS)
redeploy scan

# Active ICMP ping sweep (sends packets)
redeploy scan --ping --subnet 192.168.1.0/24

# Try specific SSH users
redeploy scan --user pi --user ubuntu --timeout 8

# List all known devices
redeploy devices

# Filter by tag or strategy
redeploy devices --tag kiosk
redeploy devices --strategy native_kiosk
redeploy devices --reachable          # seen in last 5 minutes

# JSON output for scripting
redeploy devices --json | jq '.[] | select(.tags | index("prod"))'

# Add device manually
redeploy device-add root@10.0.0.5 --tag prod --strategy docker_full --app myapp

# Remove device
redeploy device-rm root@10.0.0.5

The registry is stored at ~/.config/redeploy/devices.yaml (chmod 600 — safe for storing SSH key paths).
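As a rough illustration, a registry entry might look like the following. The exact schema is not documented here; field names are inferred from the device-add flags and scan behavior above, so treat this as a sketch, not an authoritative format:

```yaml
# Hypothetical devices.yaml entry — field names inferred from the CLI
# flags above (--tag, --strategy, --app, --name) and the scan-updated
# fields (last_seen, mac, hostname). Not an authoritative schema.
devices:
  - id: pi@192.168.1.42
    name: "Workshop kiosk #1"
    tags: [kiosk, rpi4]
    strategy: native_kiosk
    app: kiosk-app
    hostname: raspberrypi.local
    mac: "b8:27:eb:xx:xx:xx"
    last_seen: "2026-04-20T12:00:00Z"
```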

CLI reference

redeploy run SPEC [options]

Execute a deploy from a YAML spec file (or from the redeploy.yaml project manifest when no argument is given).

Option Description
--plan-only Show steps without connecting via SSH
--dry-run Connect, show steps, make no changes
--detect Live-probe host before planning (recommended for prod)
--plan-out FILE Save generated plan to file

redeploy scan [options]

Discover SSH-accessible devices on the local network.

Source Network activity Requires
known_hosts none ~/.ssh/known_hosts
arp none ip neigh / arp -a
mdns passive listen avahi-browse
ping_sweep ICMP — active --ping flag

All SSH-reachable devices are saved to the registry. Existing entries are updated in place (last_seen, mac, hostname); old entries are never deleted.

redeploy target DEVICE_ID [SPEC] [options]

Deploy a spec to a registered device. The device's host, strategy, app, and domain are overlaid onto the spec.

redeploy target pi@192.168.1.42                           # uses migration.yaml in cwd
redeploy target pi@192.168.1.42 custom.yaml --dry-run
redeploy target prod-vps --detect --plan-only

After successful deploy, a DeployRecord is saved to the device in registry (timestamp, strategy, version, ok/fail).
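For illustration, a DeployRecord might be stored under the device roughly like this. The field names follow the summary above (timestamp, strategy, version, ok/fail), but the exact layout is an assumption:

```yaml
# Hypothetical DeployRecord under a registry device — fields taken from
# the description above; the real on-disk layout may differ.
deploys:
  - timestamp: "2026-04-20T12:05:00Z"
    strategy: native_kiosk
    version: "1.0.20"
    ok: true
```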

redeploy detect / plan / apply / migrate / init / status

redeploy detect --host root@VPS_IP --app myapp -o infra.yaml
redeploy plan   --infra infra.yaml --target target.yaml -o plan.yaml
redeploy apply  --plan plan.yaml
redeploy migrate --host root@VPS_IP --app myapp --target target.yaml  # all in one
redeploy init                        # scaffold migration.yaml + redeploy.yaml
redeploy status                      # show project manifest summary

Deployment strategies

Strategy Description Use case
docker_full Docker Compose — build + up VPS production
podman_quadlet Rootless Podman systemd units Quadlet/rootless VPS
native_kiosk systemd + Chromium Openbox RPi kiosk (no Docker)
docker_kiosk Podman Quadlet in kiosk mode RPi kiosk with container
k3s Kubernetes/k3s K3s cluster
systemd Native systemd service Bare metal

native_kiosk plan steps

Generated automatically when strategy: native_kiosk:

rsync_build            → sync build/ to device
run_kiosk_installer    → bash build/infra/install-kiosk.sh
install_kiosk_service  → scp kiosk.service → /etc/systemd/system/
enable_kiosk_service   → systemctl enable --now
wait_kiosk_start       → 20s
http_health_check      → curl http://localhost:8080

docker_kiosk plan steps

rsync_build            → sync build/ to device
install_kiosk_quadlet  → cp *.container → ~/.config/containers/systemd/ + daemon-reload
start_kiosk_container  → systemctl --user restart app.service
wait_kiosk_start       → 20s
http_health_check      → curl http://localhost:8080

podman_quadlet plan steps

sync_env               → scp .env to remote
install_quadlet_files  → cp *.container *.network *.volume → ~/.config/containers/systemd/
podman_daemon_reload   → systemctl --user daemon-reload
stop_<app>             → systemctl --user stop <app>.service
start_<app>            → systemctl --user start <app>.service
wait_startup           → 15s
http_health_check      → verify_url health endpoint
version_check          → verify_version match

For system (root) mode, set stop_services: true in target — this switches to plain systemctl (without --user) and installs units to /etc/containers/systemd/.
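For example, a root-mode Quadlet target might look like this (fields as in the spec format below, with stop_services enabled):

```yaml
target:
  strategy: podman_quadlet
  host: root@10.0.0.5
  app: myapp
  stop_services: true    # root mode: plain systemctl, units in /etc/containers/systemd/
```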

docker_full plan steps

sync_env               → scp env_file → remote_dir/.env
docker_build_pull      → docker compose build (on remote)
docker_compose_up      → docker compose up -d --build
wait_startup           → 30s
http_health_check      → verify_url health endpoint
version_check          → verify_version match
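The docker_full steps above correspond roughly to this manual sequence. This is illustrative only — redeploy's actual commands may differ; the host placeholder, paths, and health URL are taken from the quick-start example:

```shell
scp envs/prod.env root@YOUR_VPS_IP:~/myapp/.env                                               # sync_env
ssh root@YOUR_VPS_IP 'cd ~/myapp && docker compose -f docker-compose.prod.yml build'          # docker_build_pull
ssh root@YOUR_VPS_IP 'cd ~/myapp && docker compose -f docker-compose.prod.yml up -d --build'  # docker_compose_up
sleep 30                                                                                      # wait_startup
curl -fsS https://myapp.example.com/api/v1/health                                             # http_health_check
# version_check: compare deployed version against verify_version
```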

migration.yaml spec format

name: "myapp deploy 1.0.19 → 1.0.20"
description: "Production VPS version bump"

source:
  strategy: docker_full       # docker_full | podman_quadlet | native_kiosk | docker_kiosk | k3s | systemd
  host: root@87.106.87.183   # SSH target (user@ip) or "local"
  app: myapp
  version: "1.0.19"
  domain: myapp.example.com
  remote_dir: ~/myapp

target:
  strategy: docker_full
  host: root@87.106.87.183
  app: myapp
  version: "1.0.20"
  domain: myapp.example.com
  remote_dir: ~/myapp
  compose_files:
    - docker-compose.vps.yml
  env_file: envs/vps.env
  verify_url: https://myapp.example.com/api/v1/health
  verify_version: "1.0.20"

extra_steps:                   # optional — appended or inserted
  - id: flush_k3s_iptables     # StepLibrary name — no action needed
    insert_before: docker_build_pull   # inject before specific step
  - id: docker_prune           # StepLibrary: prune unused images
  - id: notify_slack           # custom step (needs action:)
    action: ssh_cmd
    description: "Send deploy notification"
    command: "curl -s -X POST $SLACK_WEBHOOK -d '{\"text\":\"deployed 1.0.20\"}'"
    risk: low

StepLibrary — reusable named steps

Reference any step by id alone — no action needed. Fields can be overridden:

extra_steps:
  - id: flush_k3s_iptables           # use as-is
  - id: stop_k3s
  - id: http_health_check
    url: https://myapp.example.com/health   # override url
  - id: wait_startup_long            # 60s instead of 30s

ID Action Description
flush_k3s_iptables ssh_cmd Flush CNI-HOSTPORT-DNAT + KUBE-* chains (stale k3s rules block Docker-proxy on 80/443)
delete_k3s_ingresses kubectl_delete Delete all k3s ingresses
stop_k3s systemctl_stop Stop k3s service
disable_k3s systemctl_disable Disable k3s on boot
stop_nginx systemctl_stop Stop host nginx (port 80 conflict)
restart_traefik ssh_cmd Restart Traefik container
docker_prune ssh_cmd Prune unused images + build cache
docker_compose_down docker_compose_down Stop Docker Compose stack
wait_startup wait Wait 30s
wait_startup_long wait Wait 60s
http_health_check http_check Verify health endpoint (expect: healthy)
version_check version_check Verify deployed version
sync_env scp Copy .env to remote
podman_daemon_reload systemctl_start systemctl --user daemon-reload

insert_before

By default extra steps are appended after all generated steps. Use insert_before: <step_id> to inject at a specific position:

extra_steps:
  - id: flush_k3s_iptables
    insert_before: docker_build_pull   # runs before build, not after verify

redeploy.yaml project manifest

Place in project root — redeploy run (no args) uses it automatically:

spec: migration.yaml          # default spec file
host: root@87.106.87.183
app: myapp
domain: myapp.example.com
ssh_port: 22
env_file: envs/vps.env
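With this manifest in place, redeploy can be run from the project root without naming a spec:

```shell
redeploy run --plan-only    # preview steps using the spec from redeploy.yaml
redeploy run --detect       # full deploy via the manifest
```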

doql integration

redeploy is the deploy engine for doql declarative apps.

# Install with doql integration
pip install doql[deploy]

# doql build generates build/infra/migration.yaml automatically
DEPLOY_HOST=root@YOUR_VPS doql build

# Then deploy — no args needed
doql deploy              # calls redeploy API internally
doql deploy --plan-only
doql deploy --dry-run
doql quadlet --install   # installs Quadlet units via redeploy

doql DEPLOY.target → redeploy strategy mapping:

doql redeploy
docker-compose docker_full
quadlet podman_quadlet
kiosk-appliance native_kiosk
kubernetes k3s

Examples

Directory Scenario Strategy
01-vps-version-bump VPS Docker version bump docker_full → docker_full
02-k3s-to-docker Migrate off k3s k3s → docker_full
03-docker-to-podman-quadlet Move to rootless Podman docker_full → podman_quadlet
04-rpi-kiosk Raspberry Pi kiosk update native_kiosk → native_kiosk
05-iot-fleet-ota IoT fleet OTA update docker_full → docker_full
09-fleet-yaml Fleet with stages + scan fleet + redeploy target
11-traefik-tls Traefik + Let's Encrypt docker_full → podman_quadlet
12-ci-pipeline GitHub Actions / GitLab CI CI-triggered docker_full

# Run any example in dry-run mode (no SSH required):
redeploy run examples/01-vps-version-bump/migration.yaml --plan-only
redeploy run examples/04-rpi-kiosk/migration.yaml --plan-only

License

Licensed under Apache-2.0.
