Proxmox VM Template Builder & Deployment System

An interactive TUI for building and deploying Proxmox VM templates using Packer, Ansible, and cloud-init. Build standardized VM templates with Packer, then deploy and configure them with Ansible playbooks, all through an interactive TUI menu.
🎯 Key Features
- ✅ Build Templates: 9 Linux template variants with Packer (Ubuntu, Rocky Linux, AlmaLinux, Fedora, openSUSE)
- ✅ Deploy VMs: Clone templates and apply Ansible configurations in one workflow
- ✅ Batch Building: Build all templates sequentially or in parallel (configurable concurrency)
- ✅ Network-Safe: Cloud-init networking disabled to prevent DHCP timeouts and boot delays
- ✅ Package Profiles: Minimal, Base, Extended (configurable package sets)
- ✅ Interactive TUI: Dialog-based menu for all operations
- ✅ Automated Testing: Built-in validation for templates and deployments
🚀 Quick Start
Install from PyPI (Recommended)
pip install proxmox-template-builder
# Launch the TUI
ptb
# or
proxmox-template-builder
Custom Configuration
Override bundled defaults without modifying the installed package:
# Create your config directory
mkdir -p ~/.config/proxmox-template-builder/packages
# Override default settings (only the keys you want to change)
cat > ~/.config/proxmox-template-builder/defaults.conf <<EOF
DEFAULT_VM_CORES=4
DEFAULT_VM_MEMORY=4096
EOF
# Or point to a custom config directory
ptb --config-dir /path/to/your/config
# Or via environment variable
export PROXMOX_BUILDER_CONFIG_DIR=/path/to/your/config
Config override lookup order (highest priority first):
1. --config-dir CLI argument
2. $PROXMOX_BUILDER_CONFIG_DIR environment variable
3. ~/.config/proxmox-template-builder/
4. Bundled defaults (always loaded as base)
Override files (create only what you need):
| File | Merge Strategy |
|---|---|
| defaults.conf | Key-level override: your keys replace bundled keys |
| distributions.yml | Deep merge: add distros/versions without replacing existing entries |
| packages/base-packages.yml | Top-level key replacement: your apt_packages list replaces the bundled one |
| packages/extended-packages.yml | Same as base-packages |
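As an illustration of the deep-merge strategy, a user-level distributions.yml needs only the new entries. The key names below are hypothetical; mirror the structure of the bundled tui/data/config/distributions.yml on your install:

```yaml
# ~/.config/proxmox-template-builder/distributions.yml
# Hypothetical keys and URL -- match the bundled file's structure.
distributions:
  ubuntu:
    versions:
      "25.04":
        iso_url: "https://releases.ubuntu.com/25.04/ubuntu-25.04-live-server-amd64.iso"
```

Because this file is deep-merged, the bundled Ubuntu versions remain available alongside the one you add.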
Prerequisites
# On Proxmox host
apt-get update
apt-get install git dialog ansible python3-pip -y
# Install Packer
wget https://releases.hashicorp.com/packer/1.10.0/packer_1.10.0_linux_amd64.zip
unzip packer_1.10.0_linux_amd64.zip
mv packer /usr/local/bin/
Setup (from Git)
# Clone repository
cd /root
git clone https://github.com/yourusername/linux_automation.git
cd linux_automation
# Configure Proxmox credentials
cp packer/proxmox.auto.pkrvars.hcl.example packer/proxmox.auto.pkrvars.hcl
vim packer/proxmox.auto.pkrvars.hcl
# Configure defaults
cp config/defaults.conf.example config/defaults.conf
vim config/defaults.conf # Set cloud-init user/password
# Run interactive TUI
pip install .
ptb
# Or use the legacy bash menu
./build-template.sh
📋 Main Features
1. Build Templates
Single Template:
- Select distribution and version
- Choose package profile (Minimal/Base/Extended)
- Configure resources (CPU, RAM, disk)
- Auto-downloads ISOs if needed
- Build time: 15-30 minutes
Build All Templates (New!):
- Builds all 9 distributions automatically
- Parallel mode: 4 concurrent builds (configurable), ~2 hours total
- Sequential mode: One at a time, ~3.5 hours total
- Separate logs for each build
- Auto-configures cloud-init credentials
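The parallel mode above can be sketched with plain xargs -P. This is an assumed pattern, not the project's actual implementation; pkr_build is a stand-in for the real per-distro Packer invocation:

```shell
#!/usr/bin/env bash
# Sketch of the parallel batch-build pattern: cap concurrency at 4
# with xargs -P. The build step just echoes so the flow is visible.
set -euo pipefail

DISTROS="ubuntu-2204 ubuntu-2404 rocky-9 almalinux-9 fedora-42"
CONCURRENCY=4

: > build.log   # one shared log here; the real tool keeps a separate log per build

pkr_build() {
  # A real build step would run: packer build -var "distro=$1" ...
  echo "building $1" >> build.log
}
export -f pkr_build

printf '%s\n' $DISTROS | xargs -P "$CONCURRENCY" -n 1 bash -c 'pkr_build "$0"'
```

Sequential mode is the same pipeline with `-P 1`.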
Supported distributions:
- Ubuntu 22.04, 24.04 (Server + Desktop)
- Rocky Linux 9
- AlmaLinux 9
- Fedora 42 (Server + Desktop)
- openSUSE Leap 15.6 (Server + Desktop)
Note: Debian 12 temporarily disabled (requires preseed configuration rewrite)
2. Deploy VMs from Templates (New!)
Complete deployment workflow:
- Select template to clone
- Configure VM (ID, name, CPU, RAM, disk size)
- Choose network mode (DHCP or static)
- Select clone type (linked or full)
- Select Ansible playbooks (multi-select)
- Confirm and deploy
Available Ansible Playbooks:
- base-config - Hostname, timezone, system updates
- users - User management and SSH keys
- nginx - Web server
- postgresql - Database server (auto-detects version)
- mongodb - MongoDB database
- docker-compose - Docker and Docker Compose
- k3s - Single-node Kubernetes cluster (kubectl, helm included)
- k3s-cluster - 3-node K3S cluster (deploys 3 VMs automatically!)
- monitoring - Prometheus Node Exporter
Clone Types:
- Linked clone (default): Fast, space-efficient, requires template
- Full clone: Independent copy, slower but standalone
Deployment logs: logs/deploy-*.log (relative to the repository root)
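For the static network mode, Proxmox's qm expects an --ipconfig0 value in ip=ADDR/CIDR,gw=GATEWAY form (or ip=dhcp). The helper below composes that value; the surrounding deploy-vm.sh plumbing is assumed, but the value syntax is standard Proxmox cloud-init configuration:

```shell
#!/usr/bin/env bash
# Compose the --ipconfig0 value qm expects for DHCP or static mode.
set -euo pipefail

make_ipconfig() {
  local mode="$1" ip="${2:-}" gw="${3:-}"
  if [ "$mode" = "dhcp" ]; then
    echo "ip=dhcp"
  else
    echo "ip=${ip},gw=${gw}"
  fi
}

# Applied to a VM it would look like:
#   qm set 100 --ipconfig0 "$(make_ipconfig static 192.168.1.50/24 192.168.1.1)"
make_ipconfig dhcp                                  # prints: ip=dhcp
make_ipconfig static 192.168.1.50/24 192.168.1.1    # prints: ip=192.168.1.50/24,gw=192.168.1.1
```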
3. K3S 3-Node Cluster Deployment (New!)
Special automated deployment that creates a complete Kubernetes cluster with 1 server and 2 agent nodes.
How it works:
- Select "k3s-cluster" from the playbook menu (mutually exclusive with "k3s" single-node)
- Configure base VM specs (each node gets these specs)
- System automatically:
- Finds 3 consecutive available VMIDs (default: starts from 200)
- Deploys 3 VMs from selected template (server, agent1, agent2)
- Installs K3S server on first node
- Retrieves cluster join token
- Installs K3S agents on remaining nodes
- Forms complete cluster automatically
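The "find 3 consecutive available VMIDs" step can be sketched as a small search loop. USED_IDS here is a hard-coded stand-in for the IDs a real script would read from the cluster (e.g. via qm list or pvesh get /cluster/resources):

```shell
#!/usr/bin/env bash
# Sketch of "find 3 consecutive free VMIDs starting at 200".
set -euo pipefail

USED_IDS="105 200 201 300"   # stand-in for IDs already present in the cluster

is_used() { case " $USED_IDS " in *" $1 "*) return 0 ;; *) return 1 ;; esac; }

find_consecutive() {
  local start="$1" count="$2" id ok
  while :; do
    ok=1
    for ((id = start; id < start + count; id++)); do
      is_used "$id" && { ok=0; break; }
    done
    if [ "$ok" -eq 1 ]; then
      seq "$start" $((start + count - 1))
      return 0
    fi
    start=$((start + 1))
  done
}

find_consecutive 200 3   # 200 and 201 are taken, so this prints 202 203 204
```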
Resource requirements:
- 3x the configured specs (e.g., 2 cores = 6 cores total, 2GB RAM = 6GB total)
- Storage: Depends on clone type (linked: minimal, full: 3x disk size)
- Network: All nodes on same network, DHCP assigned IPs
Cluster configuration:
- K3S version: stable (configurable in deploy script)
- Server features: kubeconfig accessible, Traefik disabled by default
- Networking: Flannel CNI (default)
- Access: kubectl configured on server node
After deployment:
# Access the server node (first VM)
ssh admin@<server-ip>
# Verify cluster
kubectl get nodes
# Should show: server + 2 agents, all Ready
# View cluster info
kubectl cluster-info
# Check system pods
kubectl get pods -A
Access from local workstation:
# Copy kubeconfig from server node
scp admin@<server-ip>:/etc/rancher/k3s/k3s.yaml ~/.kube/config
# Update server IP in config
sed -i 's/127.0.0.1/<server-ip>/' ~/.kube/config
# Use kubectl locally
kubectl get nodes
Usage instructions: Each node has detailed instructions in /root/k3s-server-usage.txt (server) and /root/k3s-agent-usage.txt (agents).
Troubleshooting:
- Check deployment logs: logs/deploy-k3s-cluster-*.log
- Verify connectivity: all nodes must reach each other on port 6443
- Agent not joining: check token and server IP in agent logs
- View service status: systemctl status k3s (server) or systemctl status k3s-agent (agents)
- View logs: journalctl -u k3s -f or journalctl -u k3s-agent -f
Uninstall:
# On agents first
/usr/local/bin/k3s-agent-uninstall.sh
# Then on server
/usr/local/bin/k3s-uninstall.sh
4. Manage VMs and Templates
- View all VMs and templates with status
- Multi-select with SPACE bar
- Auto-stops running VMs before destruction
- Detailed confirmation and logging
📦 Package Profiles
| Profile | Description | Use Case |
|---|---|---|
| Minimal | OS only | Custom builds |
| Base | + qemu-guest-agent, cloud-init, essential utils | Most use cases |
| Extended | + dev tools, monitoring, network debugging | Development/ops servers |
Extended packages include:
- HashiCorp Tools: Terraform, Packer (from HashiCorp repository)
- Kubernetes Tools: kubectl (from Kubernetes repo), Helm, k9s (custom binaries)
- Cloud CLIs: AWS CLI v2, Azure CLI (RHEL-family: skipped, use pip)
- Development: Git, Python, build-essential/gcc, pip
- Monitoring: htop, tmux, sysstat, iotop, iftop
- Network: tcpdump, nmap, netcat, dnsutils
- Automation: Ansible
- Note: Docker removed from extended profile (use docker-compose playbook instead)
- PATH Configuration: custom binaries in /usr/local/bin are automatically added to the system PATH
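One common way to guarantee /usr/local/bin on the PATH is a profile.d drop-in like the sketch below; the exact file the extended profile installs may be named differently:

```shell
# /etc/profile.d/local-bin.sh (hypothetical filename)
# Prepend /usr/local/bin to PATH only if it is not already present.
case ":$PATH:" in
  *":/usr/local/bin:"*) ;;
  *) export PATH="/usr/local/bin:$PATH" ;;
esac
```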
🏗️ Repository Structure
linux_automation/
├── build-template.sh         # Legacy bash TUI (fallback)
├── tui/                      # Python Textual TUI (pip-installable)
│   ├── app.py                # Main Textual App
│   ├── screens/              # Wizard screens (build, deploy, manage)
│   ├── services/             # Proxmox API wrapper, script runner
│   ├── config/loader.py      # Config loading with external override support
│   └── data/                 # Bundled data files (package_data)
│       ├── config/           # defaults, distributions, package profiles
│       ├── scripts/          # deploy-vm.sh, deploy-k3s-cluster.sh
│       ├── packer/           # HCL2 templates by distro
│       ├── ansible/          # Ansible playbooks and config
│       └── cloud-init/       # Cloud-init templates
├── config -> tui/data/config    # Symlinks for backward compatibility
├── scripts -> tui/data/scripts
├── packer -> tui/data/packer
├── ansible -> tui/data/ansible
└── tests -> tui/data/tests
🎨 Template Features
All templates include:
- Cloud-init ready (credentials set automatically)
- QEMU guest agent
- Console IP display (shows IP on login screen)
- SSH configured
- Network via DHCP (cloud-init networking disabled)
- Package profile (minimal/base/extended)
Desktop templates include:
- GUI environment (GNOME/KDE)
- 4GB RAM, 25GB disk (auto-configured)
🔧 Configuration Files
config/defaults.conf
# OS-level credentials (cloud-init)
DEFAULT_CLOUDINIT_USER="admin"
DEFAULT_CLOUDINIT_PASSWORD="" # REQUIRED: Set a strong password
# Lab credentials (databases, apps, secondary users)
LAB_USER="labuser"
LAB_PASSWORD="" # REQUIRED: Set a strong password
# Resources
DEFAULT_VM_CORES=2
DEFAULT_VM_MEMORY=2048
Lab Credentials Feature:
For quick lab deployments, LAB_USER and LAB_PASSWORD are automatically applied to:
- Database users (MySQL, PostgreSQL, MSSQL)
- MSSQL SA password
- Application accounts
- See LAB_CREDENTIALS.md for details
⚠️ Lab use only - use unique passwords in production!
packer/proxmox.auto.pkrvars.hcl
proxmox_url = "https://proxmox.local:8006/api2/json"
proxmox_node = "pve"
proxmox_username = "root@pam"
proxmox_password = "your-password"
📖 Usage Examples
Build All Templates (Parallel)
./build-template.sh
→ Build All Templates
→ Parallel Build
Deploy VM with Multiple Playbooks
./build-template.sh
→ Deploy VM from Template
→ Select ubuntu-2404-extended
→ Configure VM settings
→ Select clone type: linked
→ Select playbooks: base-config, nginx, monitoring
→ Confirm deployment
Deploy K3S 3-Node Cluster
./build-template.sh
→ Deploy VM from Template
→ Select ubuntu-2404-base
→ Configure VM: 2 cores, 4GB RAM, 20GB disk
→ Select clone type: linked
→ Select playbook: k3s-cluster (only)
→ Confirm deployment
# Result: 3 VMs deployed with K3S cluster ready
# Access via: ssh admin@<server-ip>
# Then run: kubectl get nodes
CLI Usage
# Deploy VM from template
./scripts/deploy-vm.sh 9200 100 "web-server" 2 2048 20G dhcp linked base-config nginx monitoring
# Template ID: 9200
# New VM ID: 100
# Name: web-server
# Resources: 2 cores, 2048MB RAM, 20GB disk
# Network: dhcp
# Clone: linked
# Playbooks: base-config, nginx, monitoring
✅ Testing
Test Template
./tests/test-template.sh TEMPLATE_ID
# Tests:
# - Template exists and is valid
# - VM creation from template
# - QEMU agent functionality
# - Cloud-init completion
# - Cloud-init networking DISABLED ✅
# - Package installation (by profile)
# - Network connectivity
Verify Deployment
# Check VM status
qm status VM_ID
# Get VM IP
qm agent VM_ID network-get-interfaces
# View deployment logs
tail -f logs/deploy-*.log
# View Ansible logs
tail -f logs/ansible-*.log
🐛 Troubleshooting
Build Issues
ISO download fails:
cd /var/lib/vz/template/iso
./scripts/download-iso.sh ubuntu 24.04
Packer errors:
cd packer && packer init .
Deployment Issues
No IP address:
- Wait 30-60 seconds for DHCP
- Check: qm agent VM_ID network-get-interfaces
- Verify network: qm config VM_ID | grep net0
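A polling loop is a reasonable way to script the wait for an IP. This is an assumed pattern, not the project's actual code: get_ip fakes DHCP landing on the third poll, where a real script would run qm agent VM_ID network-get-interfaces and parse the JSON it returns:

```shell
#!/usr/bin/env bash
# Sketch of a wait-for-IP retry loop, runnable anywhere because the
# Proxmox agent query is faked.
set -euo pipefail

TRIES=0
get_ip() {
  TRIES=$((TRIES + 1))
  IP=""
  if [ "$TRIES" -ge 3 ]; then
    IP="192.168.1.73"   # pretend DHCP assigned an address on poll 3
  fi
}

wait_for_ip() {
  local attempts="$1" delay="$2"
  for ((i = 0; i < attempts; i++)); do
    get_ip
    if [ -n "$IP" ]; then
      echo "$IP"
      return 0
    fi
    sleep "$delay"
  done
  return 1
}

wait_for_ip 10 0   # prints 192.168.1.73 after three polls
```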
Ansible playbook fails:
- Check SSH access: ssh admin@VM_IP
- View logs: tail -f logs/ansible-*.log
- Verify password in config/defaults.conf
PostgreSQL version error:
- Fixed! Now auto-detects PostgreSQL version
- Ubuntu 24.04 uses PostgreSQL 16
- Ubuntu 22.04 uses PostgreSQL 14
Common Errors
| Error | Solution |
|---|---|
| "Template ID already exists" | Choose different ID or destroy existing |
| "VM ID X already exists in cluster" | Use suggested next available ID |
| "Failed to get VM IP address" | Wait longer, check QEMU agent status |
| "No package matching 'postgresql-14'" | Update playbook (fixed in latest version) |
🔒 Security
Template Security:
- Cloud-init credentials set per defaults.conf
- SSH keys configured via cloud-init or Ansible playbooks
- SELinux/AppArmor enabled by default
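As an illustration of SSH key injection, a cloud-init user-data snippet might look like the following; field values are hypothetical and the bundled cloud-init/ templates define the actual layout used by this project:

```yaml
#cloud-config
# Hypothetical user-data snippet showing cloud-init SSH key injection.
users:
  - name: admin
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... user@workstation
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
ssh_pwauth: false   # disable password auth once keys are in place
```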
Deployment Security:
- Ansible passwords stored in memory only
- SSH password auth can be disabled via playbooks
- Firewall configuration via Ansible playbooks
📚 Documentation
- ansible/README.md - Ansible playbook details and customization
- NETWORK-CONFIG-GUIDE.md - Network troubleshooting
- ARCHITECTURE.md - Technical design decisions
💡 Best Practices
- Use parallel builds for faster template creation (4 concurrent by default)
- Use linked clones for fast deployment and space efficiency
- Select base-config playbook for all deployments (sets hostname, updates, etc.)
- Test templates after building: ./tests/test-template.sh TEMPLATE_ID
- Check logs when troubleshooting: logs/deploy-*.log and logs/ansible-*.log
- Regular updates: rebuild templates monthly for security patches
🎯 Quick Reference
# Install from PyPI
pip install proxmox-template-builder
# Launch TUI (short alias)
ptb
# Launch with custom config
ptb --config-dir /path/to/config
# Build all templates in parallel
ptb → Build All Templates → Parallel
# Deploy VM with configurations
ptb → Deploy VM → Select template → Configure → Select playbooks
# Deploy K3S 3-node cluster
ptb → Deploy VM → Select template → Configure → Select k3s-cluster → Confirm
# Manage VMs
ptb → Manage VMs and Templates → Multi-select → Confirm
# Legacy bash menu (also still works)
./build-template.sh
# Test template
./tests/test-template.sh TEMPLATE_ID
# Manual deployment
./scripts/deploy-vm.sh TEMPLATE_ID NEW_VM_ID NAME CORES MEMORY DISK_SIZE NETWORK CLONE_TYPE [playbooks...]
# K3S cluster CLI deployment
./scripts/deploy-k3s-cluster.sh -t TEMPLATE_ID -n "cluster-name" -c 2 -m 4096 -d 20G -k linked -v stable
# View logs
tail -f "$(ls -t logs/deploy-* | head -1)"
Remember: Cloud-init networking is disabled in all templates to prevent boot delays!
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file proxmox_template_builder-1.0.20.tar.gz.
File metadata
- Download URL: proxmox_template_builder-1.0.20.tar.gz
- Upload date:
- Size: 108.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.9.25
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 00811a44d59828003a7130b9de2781dd5e70f5ba1a2171ca784cd10cc830f8a1 |
| MD5 | 92d6f67eeeac7f3a33dc87aef81e3b85 |
| BLAKE2b-256 | 0bb25a6597771fd673da69bc7aebf08f58a19b9e3d1352b8540b3a1b2ae390e6 |
File details
Details for the file proxmox_template_builder-1.0.20-py3-none-any.whl.
File metadata
- Download URL: proxmox_template_builder-1.0.20-py3-none-any.whl
- Upload date:
- Size: 167.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.9.25
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | d922fd4d2780dd2d456d7822c9a60c1379f8fa8668bc1bc1fdb528af12ef17aa |
| MD5 | 5aea9f5a96b6469c98da16a0819ddd51 |
| BLAKE2b-256 | 6d73e7191c5c8394301a5b85345a18bb2d5a23b554b69f1691ad4f3be55ced23 |