
Interactive TUI for building and deploying Proxmox VM templates using Packer, Ansible, and cloud-init


Proxmox VM Template Builder & Deployment System

Automated VM template creation and deployment for Proxmox using Packer and Ansible

Build standardized VM templates with Packer, then deploy and configure them with Ansible playbooks - all through an interactive TUI menu.

🎯 Key Features

  • ✅ Build Templates: 9 Linux distributions with Packer (Ubuntu, Rocky, AlmaLinux, Fedora, openSUSE)
  • ✅ Deploy VMs: Clone templates and apply Ansible configurations in one workflow
  • ✅ Batch Building: Build all templates sequentially or in parallel (configurable concurrency)
  • ✅ Network-Safe: Cloud-init networking disabled to prevent DHCP timeouts and boot delays
  • ✅ Package Profiles: Minimal, Base, Extended (configurable package sets)
  • ✅ Interactive TUI: Dialog-based menu for all operations
  • ✅ Automated Testing: Built-in validation for templates and deployments

📋 Quick Start

Prerequisites

# On Proxmox host
apt-get update
apt-get install git dialog ansible -y

# Install Packer
wget https://releases.hashicorp.com/packer/1.10.0/packer_1.10.0_linux_amd64.zip
unzip packer_1.10.0_linux_amd64.zip
mv packer /usr/local/bin/

Setup

# Clone repository
cd /root
git clone https://github.com/yourusername/linux_automation.git
cd linux_automation

# Configure Proxmox credentials
cp packer/proxmox.auto.pkrvars.hcl.example packer/proxmox.auto.pkrvars.hcl
vim packer/proxmox.auto.pkrvars.hcl

# Configure defaults
cp config/defaults.conf.example config/defaults.conf
vim config/defaults.conf  # Set cloud-init user/password

# Run interactive menu
./build-template.sh

🚀 Main Features

1. Build Templates

Single Template:

  • Select distribution and version
  • Choose package profile (Minimal/Base/Extended)
  • Configure resources (CPU, RAM, disk)
  • Auto-downloads ISOs if needed
  • Build time: 15-30 minutes

Build All Templates (New!):

  • Builds all 9 distributions automatically
  • Parallel mode: 4 concurrent builds (configurable), ~2 hours total (see the sketch after this list)
  • Sequential mode: One at a time, ~3.5 hours total
  • Separate logs for each build
  • Auto-configures cloud-init credentials
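
For reference, here is a minimal sketch of how batch builds could be driven in parallel with plain shell. It is not the menu's actual code; the template directories are only examples taken from the repository layout, and MAX_PARALLEL_BUILDS comes from config/defaults.conf.

# Rough sketch: run several Packer builds concurrently, one log per build
source config/defaults.conf
printf '%s\n' packer/ubuntu packer/rhel-family packer/suse | \
  xargs -P "${MAX_PARALLEL_BUILDS:-4}" -I{} sh -c \
    'packer build -var-file=packer/proxmox.auto.pkrvars.hcl "{}" > "logs/build-$(basename {}).log" 2>&1'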

Supported distributions:

  • Ubuntu 22.04, 24.04 (Server + Desktop)
  • Rocky Linux 9
  • AlmaLinux 9
  • Fedora 42 (Server + Desktop)
  • openSUSE Leap 15.6 (Server + Desktop)

Note: Debian 12 temporarily disabled (requires preseed configuration rewrite)

2. Deploy VMs from Templates (New!)

Complete deployment workflow:

  1. Select template to clone
  2. Configure VM (ID, name, CPU, RAM, disk size)
  3. Choose network mode (DHCP or static)
  4. Select clone type (linked or full)
  5. Select Ansible playbooks (multi-select)
  6. Confirm and deploy

Available Ansible Playbooks:

  • base-config - Hostname, timezone, system updates
  • users - User management and SSH keys
  • nginx - Web server
  • postgresql - Database server (auto-detects version)
  • mongodb - MongoDB database
  • docker-compose - Docker and Docker Compose
  • k3s - Single-node Kubernetes cluster (kubectl, helm included)
  • k3s-cluster - 3-Node K3S Cluster (deploys 3 VMs automatically!)
  • monitoring - Prometheus Node Exporter

Clone Types:

  • Linked clone (default): Fast, space-efficient, requires template
  • Full clone: Independent copy, slower but standalone (see the qm clone example below)
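
Under the hood the two modes map to Proxmox's qm clone behavior. A rough illustration with placeholder IDs (template 9200, new VM 100):

# Linked clone (the default when cloning a template): fast and space-efficient
qm clone 9200 100 --name web-server

# Full clone: an independent copy of the disk
qm clone 9200 100 --name web-server --full 1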

Deployment logs: logs/deploy-*.log (relative to the repository directory)

3. K3S 3-Node Cluster Deployment (New!)

Special automated deployment that creates a complete Kubernetes cluster with 1 server and 2 agent nodes.

How it works:

  1. Select "k3s-cluster" from the playbook menu (mutually exclusive with "k3s" single-node)
  2. Configure base VM specs (each node gets these specs)
  3. System automatically:
    • Finds 3 consecutive available VMIDs (default: starts from 200)
    • Deploys 3 VMs from selected template (server, agent1, agent2)
    • Installs K3S server on first node
    • Retrieves cluster join token
    • Installs K3S agents on remaining nodes
    • Forms the complete cluster automatically (sketched below)
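
Roughly equivalent manual commands, assuming the standard get.k3s.io installer; the deploy script's exact flags may differ:

# Server node: install K3S (stable channel) with Traefik disabled
curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=stable sh -s - server --disable traefik

# Server node: print the join token the agents need
sudo cat /var/lib/rancher/k3s/server/node-token

# Each agent node: join the server on port 6443
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 \
  K3S_TOKEN=<token-from-server> INSTALL_K3S_CHANNEL=stable sh -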

Resource requirements:

  • 3x the configured specs (e.g., 2 cores = 6 cores total, 2GB RAM = 6GB total)
  • Storage: Depends on clone type (linked: minimal, full: 3x disk size)
  • Network: All nodes on same network, DHCP assigned IPs

Cluster configuration:

  • K3S version: stable (configurable in deploy script)
  • Server features: kubeconfig accessible, Traefik disabled by default
  • Networking: Flannel CNI (default)
  • Access: kubectl configured on server node

After deployment:

# Access the server node (first VM)
ssh admin@<server-ip>

# Verify cluster
kubectl get nodes
# Should show: server + 2 agents, all Ready

# View cluster info
kubectl cluster-info

# Check system pods
kubectl get pods -A

Access from local workstation:

# Copy kubeconfig from server node
scp admin@<server-ip>:/etc/rancher/k3s/k3s.yaml ~/.kube/config

# Update server IP in config
sed -i 's/127.0.0.1/<server-ip>/' ~/.kube/config

# Use kubectl locally
kubectl get nodes

Usage instructions: Each node has detailed instructions in /root/k3s-server-usage.txt (server) and /root/k3s-agent-usage.txt (agents).

Troubleshooting:

  • Check deployment logs: logs/deploy-k3s-cluster-*.log
  • Verify connectivity: All nodes must reach each other on port 6443
  • Agent not joining: Check token and server IP in agent logs
  • View service status: systemctl status k3s (server) or systemctl status k3s-agent (agents)
  • View logs: journalctl -u k3s -f or journalctl -u k3s-agent -f

Uninstall:

# On agents first
/usr/local/bin/k3s-agent-uninstall.sh

# Then on server
/usr/local/bin/k3s-uninstall.sh

4. Manage VMs and Templates

  • View all VMs and templates with status
  • Multi-select with SPACE bar
  • Auto-stops running VMs before destruction (roughly the qm commands sketched below)
  • Detailed confirmation and logging
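
The destroy path is roughly equivalent to the following Proxmox commands for each selected VM (ID 100 is a placeholder); the menu adds confirmation prompts and logging around them:

qm stop 100              # only needed if the VM is still running
qm destroy 100 --purge   # remove the VM and its references (backup jobs, etc.)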

📦 Package Profiles

Profile  | Description                                      | Use Case
Minimal  | OS only                                          | Custom builds
Base     | + qemu-guest-agent, cloud-init, essential utils  | Most use cases
Extended | + dev tools, monitoring, network debugging       | Development/ops servers

Extended packages include:

  • HashiCorp Tools: Terraform, Packer (from HashiCorp repository)
  • Kubernetes Tools: kubectl (from Kubernetes repo), Helm, k9s (custom binaries)
  • Cloud CLIs: AWS CLI v2, Azure CLI (skipped on the RHEL family; install via pip there)
  • Development: Git, Python, build-essential/gcc, pip
  • Monitoring: htop, tmux, sysstat, iotop, iftop
  • Network: tcpdump, nmap, netcat, dnsutils
  • Automation: Ansible
  • Note: Docker removed from extended profile (use docker-compose playbook instead)
  • PATH Configuration: Custom binaries in /usr/local/bin are automatically added to system PATH

๐Ÿ—‚๏ธ Repository Structure

linux_automation/
├── build-template.sh          # Main script (build, deploy, manage)
├── config/
│   ├── defaults.conf          # Cloud-init credentials, defaults
│   ├── packages/*.yml         # Package definitions by profile
│   └── distributions.yml      # ISO sources and metadata
├── packer/                    # Packer templates by distro
│   ├── ubuntu/, debian/, rhel-family/, suse/
│   └── proxmox.auto.pkrvars.hcl.example
├── ansible/                   # Ansible playbooks (NEW!)
│   ├── playbooks/             # Foundation, application, operations
│   └── ansible.cfg
├── scripts/
│   ├── deploy-vm.sh           # VM deployment automation (NEW!)
│   └── download-iso.sh        # ISO downloads
└── tests/
    └── test-template.sh       # Template validation

🎨 Template Features

All templates include:

  • Cloud-init ready (credentials set automatically)
  • QEMU guest agent
  • Console IP display (shows IP on login screen)
  • SSH configured
  • Network via DHCP (cloud-init networking disabled; see the snippet below)
  • Package profile (minimal/base/extended)
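
For reference, disabling cloud-init's network configuration typically comes down to a drop-in like the one below. This is a sketch of the standard cloud-init mechanism; the templates may set it during provisioning rather than at runtime.

# Leave networking to the OS/DHCP instead of cloud-init
cat > /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg <<'EOF'
network: {config: disabled}
EOF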

Desktop templates include:

  • GUI environment (GNOME/KDE)
  • 4GB RAM, 25GB disk (auto-configured)

🔧 Configuration Files

config/defaults.conf

# OS-level credentials (cloud-init)
DEFAULT_CLOUDINIT_USER="admin"
DEFAULT_CLOUDINIT_PASSWORD="YourPassword123"

# Lab credentials (databases, apps, secondary users)
LAB_USER="labuser"
LAB_PASSWORD="LabPass123!"

# Resources
DEFAULT_VM_CORES=2
DEFAULT_VM_MEMORY=2048
MAX_PARALLEL_BUILDS=4

Lab Credentials Feature: For quick lab deployments, LAB_USER and LAB_PASSWORD are automatically applied to:

  • Database users (MySQL, PostgreSQL, MSSQL)
  • MSSQL SA password
  • Application accounts
  • See LAB_CREDENTIALS.md for details

โš ๏ธ Lab use only - Use unique passwords for production!

packer/proxmox.auto.pkrvars.hcl

proxmox_url  = "https://proxmox.local:8006/api2/json"
proxmox_node = "pve"
proxmox_username = "root@pam"
proxmox_password = "your-password"

๐Ÿ“ Usage Examples

Build All Templates (Parallel)

./build-template.sh
→ Build All Templates
→ Parallel Build

Deploy VM with Multiple Playbooks

./build-template.sh
→ Deploy VM from Template
→ Select ubuntu-2404-extended
→ Configure VM settings
→ Select clone type: linked
→ Select playbooks: base-config, nginx, monitoring
→ Confirm deployment

Deploy K3S 3-Node Cluster

./build-template.sh
→ Deploy VM from Template
→ Select ubuntu-2404-base
→ Configure VM: 2 cores, 4GB RAM, 20GB disk
→ Select clone type: linked
→ Select playbook: k3s-cluster (only)
→ Confirm deployment

# Result: 3 VMs deployed with K3S cluster ready
# Access via: ssh admin@<server-ip>
# Then run: kubectl get nodes

CLI Usage

# Deploy VM from template
./scripts/deploy-vm.sh 9200 100 "web-server" 2 2048 20G dhcp linked base-config nginx monitoring

# Template ID: 9200
# New VM ID: 100
# Name: web-server
# Resources: 2 cores, 2048MB RAM, 20GB disk
# Network: dhcp
# Clone: linked
# Playbooks: base-config, nginx, monitoring

✅ Testing

Test Template

./tests/test-template.sh TEMPLATE_ID

# Tests:
# - Template exists and is valid
# - VM creation from template
# - QEMU agent functionality
# - Cloud-init completion
# - Cloud-init networking DISABLED ✓
# - Package installation (by profile)
# - Network connectivity

Verify Deployment

# Check VM status
qm status VM_ID

# Get VM IP
qm agent VM_ID network-get-interfaces

# View deployment logs
tail -f logs/deploy-*.log

# View Ansible logs
tail -f logs/ansible-*.log

๐Ÿ› Troubleshooting

Build Issues

ISO download fails:

# Re-run the download from the repository (ISOs are stored under /var/lib/vz/template/iso)
cd /root/linux_automation
./scripts/download-iso.sh ubuntu 24.04

Packer errors:

cd packer && packer init .

Deployment Issues

No IP address:

  • Wait 30-60 seconds for DHCP
  • Check: qm agent VM_ID network-get-interfaces
  • Verify network: qm config VM_ID | grep net0

Ansible playbook fails:

  • Check SSH access: ssh admin@VM_IP
  • View logs: tail -f logs/ansible-*.log
  • Verify the password in config/defaults.conf, or re-run the playbook manually (sketched below)
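
A hedged example of re-running a playbook by hand against a single VM; the playbook path and filename are assumptions based on the repository layout, and the user comes from defaults.conf:

# Inline inventory (note the trailing comma); -k prompts for the SSH password
ansible-playbook -i "<vm-ip>," -u admin -k ansible/playbooks/base-config.yml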

PostgreSQL version error:

  • Fixed! The playbook now auto-detects the PostgreSQL version (see the sketch after this list)
  • Ubuntu 24.04 uses PostgreSQL 16
  • Ubuntu 22.04 uses PostgreSQL 14
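
One way such detection can work on Debian/Ubuntu, shown only as an illustration; the playbook's actual logic may differ:

# Ask apt which PostgreSQL major version the 'postgresql' metapackage pulls in
apt-cache depends postgresql | awk -F'postgresql-' '/Depends: postgresql-[0-9]/ {print $2}'
# e.g. prints 16 on Ubuntu 24.04 and 14 on Ubuntu 22.04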

Common Errors

Error                                  | Solution
"Template ID already exists"           | Choose a different ID or destroy the existing template
"VM ID X already exists in cluster"    | Use the suggested next available ID
"Failed to get VM IP address"          | Wait longer, check QEMU agent status
"No package matching 'postgresql-14'"  | Update the playbook (fixed in the latest version)

🔒 Security

Template Security:

  • Cloud-init credentials set per defaults.conf
  • SSH keys configured via cloud-init or Ansible playbooks
  • SELinux/AppArmor enabled by default

Deployment Security:

  • Ansible passwords stored in memory only
  • SSH password auth can be disabled via playbooks (see the snippet below)
  • Firewall configuration via Ansible playbooks
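
As an illustration, disabling SSH password authentication usually comes down to something like the following; the playbooks may implement it differently:

# Turn off SSH password logins and restart the daemon (ssh on Debian/Ubuntu, sshd elsewhere)
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl restart ssh 2>/dev/null || systemctl restart sshd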

📚 Documentation

💡 Best Practices

  1. Use parallel builds for faster template creation (4 concurrent by default)
  2. Use linked clones for fast deployment and space efficiency
  3. Select base-config playbook for all deployments (sets hostname, updates, etc.)
  4. Test templates after building: ./tests/test-template.sh TEMPLATE_ID
  5. Check logs when troubleshooting: logs/deploy-*.log and logs/ansible-*.log
  6. Regular updates: Rebuild templates monthly for security patches

🎯 Quick Reference

# Build all templates in parallel
./build-template.sh → Build All Templates → Parallel

# Deploy VM with configurations
./build-template.sh → Deploy VM → Select template → Configure → Select playbooks

# Deploy K3S 3-node cluster
./build-template.sh → Deploy VM → Select template → Configure → Select k3s-cluster → Confirm

# Manage VMs
./build-template.sh → Manage VMs and Templates → Multi-select → Confirm

# Test template
./tests/test-template.sh TEMPLATE_ID

# Manual deployment
./scripts/deploy-vm.sh TEMPLATE_ID NEW_VM_ID NAME CORES MEMORY DISK_SIZE NETWORK CLONE_TYPE [playbooks...]

# K3S cluster CLI deployment
./scripts/deploy-k3s-cluster.sh -t TEMPLATE_ID -n "cluster-name" -c 2 -m 4096 -d 20G -k linked -v stable

# View logs
tail -f "$(ls -t logs/deploy-* | head -1)"

Remember: Cloud-init networking is disabled in all templates to prevent boot delays!
