# HiveFrame
Proxmox automation toolkit — define infrastructure as YAML, provision with one command
## Installation

```
pip install hiveframe
```
## What it does
HiveFrame lets you define Proxmox infrastructure as YAML and provision it with one command. VLANs, VMs, firewall rules, and cloud-init config — all declarative, all version-controllable. Built for VMware refugees and MSPs who want GitOps-style workflows without enterprise tooling.
Why not Terraform? The Proxmox provider works but pulls in significant overhead for straightforward use cases. HiveFrame is the lightweight middle ground — a YAML file, a CLI, and no external state backends.
## Quick start

```
pip install hiveframe
hiveframe config init   # set up your Proxmox connection interactively
hiveframe init          # scaffold a stack.yaml with commented fields
hiveframe validate      # check it against the schema
hiveframe plan          # preview what will be created
hiveframe apply         # provision VLANs, VMs, and firewall rules
```

That's the full workflow. Everything else (`status`, `destroy`, `status --watch`) is documented below.
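A minimal `stack.yaml` might look like the sketch below. All values are illustrative; the field names and defaults come from the stack reference tables further down.

```yaml
name: homelab
node: pve

vlans:
  - name: mgmt
    vlan_id: 10
    cidr: 192.168.10.0/24

vms:
  - name: controller
    template_id: 9000   # VMID of an existing cloud-init template
    vlan: mgmt
    cores: 2
    memory_mb: 2048
```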
## Commands

| Command | Description |
|---|---|
| `hiveframe config init` | Interactive wizard to create `~/.hiveframe/config.yaml` |
| `hiveframe init` | Scaffold a new `stack.yaml` with commented fields |
| `hiveframe validate` | Validate `stack.yaml` against the schema |
| `hiveframe plan` | Dry-run — show what would be created or skipped |
| `hiveframe apply` | Provision VLANs, VMs, and firewall rules |
| `hiveframe status` | Show live drift between Proxmox and the stack definition |
| `hiveframe status --watch` | Poll continuously; fires webhook alerts on transitions |
| `hiveframe destroy` | Tear down all resources defined in the stack |
| `hiveframe web` | Start the web dashboard at http://127.0.0.1:8080 |
| `hiveframe webhook test -n <name>` | Send a test drift payload to a configured webhook |

All commands accept `-f` / `--file` to point at a non-default stack file. The root group accepts `--debug` to print the resolved config path and diagnostics.
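For example, to work against a non-default stack file (the filename here is illustrative):

```
hiveframe plan -f prod.stack.yaml
hiveframe apply --file prod.stack.yaml
```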
## Stack reference

### Top-level

| Field | Type | Default | Description |
|---|---|---|---|
| `name` | str | — | Unique stack identifier |
| `description` | str | `""` | Optional free-text description |
| `node` | str | `pve` | Default Proxmox node for all resources |
| `vlans` | list | `[]` | List of VLAN definitions |
| `vms` | list | `[]` | List of VM definitions |
### VlanConfig

| Field | Type | Default | Description |
|---|---|---|---|
| `name` | str | `""` | Friendly identifier, referenced by VMs |
| `vlan_id` | int | — | 802.1Q VLAN ID (1–4094) |
| `cidr` | str | — | Subnet in CIDR notation, e.g. `192.168.10.0/24` |
| `gateway` | str | `null` | Gateway IP — assigned to the bridge |
| `description` | str | `""` | Written as a comment on the Proxmox bridge |
| `node` | str | `null` | Override stack-level node for this VLAN |

HiveFrame creates a dedicated Linux bridge per VLAN: `vmbr<vlan_id>`.
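For instance, a VLAN entry using the fields above (values illustrative) would create the bridge `vmbr50`:

```yaml
vlans:
  - name: dmz
    vlan_id: 50
    cidr: 10.50.0.0/24
    gateway: 10.50.0.1
    description: exposed services
```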
### VmConfig

| Field | Type | Default | Description |
|---|---|---|---|
| `name` | str | — | VM hostname and Proxmox name (RFC hostname) |
| `template_id` | int | — | VMID of the template to clone |
| `vlan` | str | — | `name` of the VLAN to attach `net0` to |
| `cores` | int | `2` | vCPU count |
| `memory_mb` | int | `2048` | RAM in megabytes |
| `disk_gb` | int | `20` | Total disk size in GB — resizes above the template |
| `storage` | str | `local-lvm` | Proxmox storage pool for disk and cloud-init |
| `start_on_create` | bool | `true` | Start the VM immediately after provisioning |
| `tags` | list | `[]` | Proxmox tags to apply |
| `cloud_init` | object | see below | Cloud-init configuration block |
| `firewall` | list | `[]` | Per-VM firewall rules (see FirewallRule) |
| `node` | str | `null` | Override stack-level node for this VM |
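A fuller VM entry exercising these fields might look like this (all values illustrative):

```yaml
vms:
  - name: web01
    template_id: 9000
    vlan: dmz
    cores: 4
    memory_mb: 4096
    disk_gb: 40
    storage: local-lvm
    tags: [web, prod]
```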
### CloudInitConfig

| Field | Type | Default | Description |
|---|---|---|---|
| `user` | str | `ubuntu` | Default user created by cloud-init |
| `password` | str | `null` | Plain-text password — prefer `ssh_keys` in production |
| `ssh_keys` | list[str] | `[]` | Authorized public keys |
| `ip_config` | str | `null` | `dhcp`, or Proxmox format: `ip=192.168.1.5/24,gw=192.168.1.1` |
| `nameservers` | list[str] | `[]` | DNS servers |
| `search_domain` | str | `null` | DNS search domain |
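A `cloud_init` block combining these fields might look like this (key and IP values are illustrative; the truncated public key is a placeholder):

```yaml
cloud_init:
  user: ubuntu
  ssh_keys:
    - ssh-ed25519 AAAA... user@host
  ip_config: ip=192.168.10.5/24,gw=192.168.10.1
  nameservers: [1.1.1.1, 9.9.9.9]
```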
### FirewallRule

Per-VM firewall rules go under the `firewall:` list. Rules are applied to the Proxmox VM firewall and take effect immediately on `hiveframe apply`.

| Field | Type | Default | Description |
|---|---|---|---|
| `direction` | `in` \| `out` | `in` | Traffic direction |
| `action` | `ACCEPT` \| `DROP` \| `REJECT` | `ACCEPT` | Rule action |
| `protocol` | `tcp` \| `udp` \| `icmp` | `null` | Required when `dport` or `sport` is set |
| `source` | str | `null` | Source IP or CIDR |
| `dest` | str | `null` | Destination IP or CIDR |
| `dport` | str | `null` | Destination port or range, e.g. `22` or `80:443` |
| `sport` | str | `null` | Source port or range |
| `comment` | str | `null` | Free-text label shown in the Proxmox UI |
| `enabled` | bool | `true` | Whether the rule is active |
```yaml
firewall:
  - direction: in
    action: ACCEPT
    protocol: tcp
    dport: "22"
    comment: allow SSH
  - direction: in
    action: DROP
    comment: drop all inbound
```
## State file

After `apply`, HiveFrame writes `.hiveframe-state.json` next to your `stack.yaml`. This file tracks VMIDs, VLAN bridges, and firewall rule counts — it is required for `status` and `destroy`. Add it to `.gitignore` if your stack contains sensitive values.
## Configuration

`~/.hiveframe/config.yaml` is loaded automatically. Configs written before 0.1.4 using the flat `proxmox_host:` format are auto-migrated on first read.

### Single node

```yaml
nodes:
  - name: pve
    host: 10.0.20.1
    user: root@pam
    token_id: hiveframe
    token_secret: your-token-secret
    verify_ssl: false
```
### Multiple nodes

Add more entries under `nodes:`. Each VLAN and VM can target a specific node with a `node:` override; the stack-level `node:` is the default.

```yaml
nodes:
  - name: pve
    host: 10.0.20.1
    token_id: hiveframe
    token_secret: your-secret-1
  - name: pve2
    host: 10.0.20.2
    token_id: hiveframe
    token_secret: your-secret-2
```

Then in `stack.yaml`:

```yaml
name: my-stack
node: pve            # default
vlans:
  - name: mgmt
    vlan_id: 10
    cidr: 192.168.10.0/24
  - name: dmz
    vlan_id: 50
    cidr: 10.50.0.0/24
    node: pve2       # this VLAN is provisioned on pve2
vms:
  - name: controller
    template_id: 9000
    vlan: mgmt       # goes on pve (stack default)
  - name: worker
    template_id: 9000
    vlan: dmz
    node: pve2       # this VM is provisioned on pve2
```
## Webhook alerts

Webhooks fire on state transitions detected during `status --watch` — once when drift is first detected, once when it clears. They are never fired on every poll.

```yaml
nodes:
  - name: pve
    host: 10.0.20.1
    token_id: hiveframe
    token_secret: your-secret
webhooks:
  - name: slack-homelab
    url: https://hooks.slack.com/services/xxx/yyy/zzz
    provider: slack
    on: [drift, recovery]
  - name: discord-alerts
    url: https://discord.com/api/webhooks/xxx/yyy
    provider: discord
    on: [drift]
  - name: my-system
    url: https://my-system.internal/hooks/hiveframe
    provider: generic
    on: [drift, recovery]
```

Supported providers: `slack`, `discord`, `generic`.
Supported events: `drift` (healthy → drifted), `recovery` (drifted → healthy).

Failed webhooks are retried once after 5 seconds. A warning is printed to the terminal if both attempts fail; the watch loop continues regardless.

Test a webhook without waiting for real drift:

```
hiveframe webhook test --name slack-homelab
```
## Environment variables (single-node fallback)

When no config file exists, HiveFrame reads these environment variables:

| Variable | Description |
|---|---|
| `HIVEFRAME_PROXMOX_HOST` | IP or hostname, no scheme |
| `HIVEFRAME_PROXMOX_USER` | Default: `root@pam` |
| `HIVEFRAME_PROXMOX_TOKEN_ID` | API token name |
| `HIVEFRAME_PROXMOX_TOKEN_SECRET` | API token secret |
| `HIVEFRAME_PROXMOX_VERIFY_SSL` | Default: `false` |
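For a quick one-off run without a config file, export them in the shell (values illustrative):

```
export HIVEFRAME_PROXMOX_HOST=10.0.20.1
export HIVEFRAME_PROXMOX_TOKEN_ID=hiveframe
export HIVEFRAME_PROXMOX_TOKEN_SECRET=your-token-secret
hiveframe status
```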
## Web dashboard

```
hiveframe web                 # http://127.0.0.1:8080
hiveframe web --port 9090     # custom port
hiveframe web --host 0.0.0.0  # bind all interfaces
```

The dashboard shows live VLAN and VM status for the first configured node.
## Requirements

- Python 3.11+
- Proxmox VE 7.x, 8.x, or 9.x
- API token with sufficient Proxmox permissions (PVEVMAdmin or equivalent)
- Template VM with a cloud-init drive (`ide2`) for VM provisioning
## Development

```
git clone https://codeberg.org/itzdrixxyy/hiveframe.git
cd hiveframe
pip install -e ".[dev]"
pytest
```

Copy `stack.yaml.example` to `stack.yaml` and fill in your values before running `apply`.
## Roadmap

Built:

- VLAN and VM provisioning with idempotent apply/destroy
- Firewall rule provisioning (per-VM)
- Drift detection with `status --watch`
- Webhook alerts (Slack, Discord, generic) on drift/recovery transitions
- Multi-node support — per-resource `node:` override, lazy client connections
- State file tracking for idempotent operations
- Web dashboard (FastAPI + HTMX)

Next:

- Tag-based filtering for plan/apply/destroy
- Scheduled drift checks (cron mode)
- Proxmox privilege separation (non-root API tokens)
## License
MIT — itzdrixxyy