Proxmox VE Control
pvecontrol (https://pypi.org/project/pvecontrol/) is a CLI tool to manage Proxmox VE clusters and perform intermediate and advanced tasks that aren't available (or aren't straightforward) in the Proxmox web UI or default CLI tools.
It was written by (and for) teams managing multiple Proxmox clusters, sometimes with many hypervisors. Conversely, if your Proxmox install consists of a single cluster with a single node, the features of pvecontrol might not be very interesting for you!
Here are a few examples of things you can do with pvecontrol:
- List all VMs across all hypervisors, along with their state and size;
- Evacuate (=drain) a hypervisor, i.e. migrate all VMs that are running on that hypervisor, automatically picking nodes with enough capacity to host these VMs.
- Run sanity checks on a cluster. Sanity checks are sets of tests designed to verify the integrity of the cluster.
To communicate with Proxmox VE, pvecontrol uses proxmoxer, a wonderful library that enables communication with various Proxmox APIs.
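To give a rough idea of what this looks like under the hood, here is a minimal sketch of listing every VM through the Proxmox cluster resources endpoint, written against a proxmoxer-style client object. This is an illustration, not pvecontrol's actual code; the `api` parameter and the selected fields are assumptions.

```python
def list_vms(api):
    # /cluster/resources?type=vm returns every VM across all nodes,
    # so a single API call covers the whole cluster.
    return [
        {"vmid": r["vmid"], "node": r["node"], "status": r["status"]}
        for r in api.cluster.resources.get(type="vm")
    ]
```

With proxmoxer, the `api` object would typically be built with `ProxmoxAPI("host", user="pvecontrol@pve", password="...", verify_ssl=False)`.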
Installation
pvecontrol requires Python version 3.9 or above.
The easiest way to install it is with pip; new versions are automatically published to the PyPI repository. We recommend using pipx, which automatically creates a dedicated Python virtual environment:
pipx install pvecontrol
Configuration
To use pvecontrol, you must create a YAML configuration in $HOME/.config/pvecontrol/config.yaml. That file will list your clusters and how to authenticate with them.
pvecontrol only uses the Proxmox HTTP API, which means that you can use most Proxmox authentication mechanisms, including @pve realm users and tokens.
HTTPS certificate verification is disabled by default, but can be enabled using the ssl_verify boolean.
As an example, here's how to set up a dedicated user for pvecontrol, with read-only access to the Proxmox API:
pveum user add pvecontrol@pve --password my.password.is.weak
pveum acl modify / --roles PVEAuditor --users pvecontrol@pve
You can then create the following configuration file in $HOME/.config/pvecontrol/config.yaml:
clusters:
  - name: fr-par-1
    host: localhost
    user: pvecontrol@pve
    password: my.password.is.weak
    ssl_verify: true
And see pvecontrol in action right away:
pvecontrol -c fr-par-1 vm list
If you plan to use pvecontrol to move VMs around, you should grant it PVEVMAdmin permissions:
pveum acl modify / --roles PVEVMAdmin --users pvecontrol@pve
API tokens
pvecontrol also supports authentication with API tokens. A Proxmox API token is associated with an individual user, and can be given separate permissions and expiration dates. You can learn more about Proxmox tokens in this section of the Proxmox documentation.
As an example, to create a new API token associated with the pvecontrol@pve user that inherits all its permissions, you can use the following command:
pveum user token add pvecontrol@pve mytoken --privsep 0
Then retrieve the token value and add it to the configuration file to authenticate:
clusters:
  - name: fr-par-1
    host: localhost
    user: pvecontrol@pve
    token_name: mytoken
    token_value: randomtokenvalue
Reverse proxies
pvecontrol supports certificate-based authentication to a reverse proxy, which makes it suitable for use with tools like Teleport (using Teleport apps).
clusters:
  - name: fr-par-1
    host: localhost
    user: pvecontrol@pve
    password: my.password.is.weak
    proxy_certificate_path: /tmp/proxmox-reverse-proxy.pem
    proxy_certificate_key_path: /tmp/proxmox-reverse-proxy
You can also use command substitution syntax and the key proxy_certificate to execute a command that will output a JSON document containing the certificate and key paths.
clusters:
  - name: fr-par-1
    host: localhost
    user: pvecontrol@pve
    password: my.password.is.weak
    proxy_certificate: $(my_custom_command login proxmox-fr-par-1)
It should output something like this:
{
  "cert": "/tmp/proxmox-reverse-proxy.pem",
  "key": "/tmp/proxmox-reverse-proxy",
  "anything_else": "it is ok to have other fields, they will be ignored. this is to support existing commands"
}
CAUTION: environment variable expansion and ~ expansion are not supported.
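The contract above can be sketched in a few lines of Python. This is an illustration of the documented behavior (parse the JSON, keep only cert and key, ignore everything else), not pvecontrol's actual implementation:

```python
import json

def parse_certificate_paths(command_output):
    # Only "cert" and "key" matter; any other fields are ignored,
    # which keeps the contract compatible with existing commands.
    doc = json.loads(command_output)
    return doc["cert"], doc["key"]
```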
Better security
Instead of specifying users, passwords, and certificate paths in plain text in the configuration file, you can use the shell command substitution syntax $(...) inside the user, password, and proxy_certificate fields; for instance:
clusters:
  - name: prod-cluster-1
    host: 10.10.10.10
    user: pvecontrol@pve
    ssl_verify: true
    password: $(command to get -password)
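As a rough illustration of how such a field could be resolved (a sketch under the assumption that the whole value is a single $(...) expression, not pvecontrol's actual implementation):

```python
import re
import subprocess

_SUBST = re.compile(r"^\$\((?P<cmd>.+)\)$")

def resolve(value):
    # Values like "$(some-secret-tool get password)" are executed through
    # the shell and replaced by their stripped stdout; plain values pass
    # through untouched.
    match = _SUBST.match(value.strip())
    if match is None:
        return value
    result = subprocess.run(
        match.group("cmd"), shell=True,
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```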
Worse security
You can use @pam users (and even root@pam) and passwords in the pvecontrol YAML configuration file; but you probably should not, as anyone with read access to the configuration file would then automatically gain shell access to your Proxmox hypervisor. Not recommended in production!
Advanced configuration options
The configuration file can include a node: section to specify CPU and memory policies. These are used when scheduling a VM (i.e. determining which node it should run on), in particular when draining a node for maintenance.
There are currently two parameters: cpufactor and memoryminimum.
cpufactor indicates the level of overcommit allowed on a hypervisor. 1 means no overcommit at all; 5 means "a hypervisor with 8 cores can run VMs with up to 5x8 = 40 cores in total".
memoryminimum is the amount of memory that should always be available on a node, in bytes. When scheduling a VM (for instance, when automatically moving VMs around), pvecontrol will make sure that this amount of memory remains available for the hypervisor OS itself. Caution: if that amount is set to zero, it will be possible to allocate the entire host memory to virtual machines, leaving no memory for the hypervisor operating system and management daemons!
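To make the arithmetic concrete, here is a sketch of the kind of capacity check these two parameters imply. The function names and exact formulas are assumptions for illustration; the real scheduling logic lives in pvecontrol.

```python
def vcpus_schedulable(physical_cores, cpufactor, allocated_vcpus):
    # A node with 8 cores and cpufactor 5 can host up to 5 * 8 = 40 vCPUs.
    return physical_cores * cpufactor - allocated_vcpus

def memory_schedulable(total_bytes, memoryminimum, allocated_bytes):
    # memoryminimum bytes are always kept free for the hypervisor OS.
    return total_bytes - memoryminimum - allocated_bytes
```

For instance, vcpus_schedulable(8, 5, 0) yields 40, matching the example above, while cpufactor 1 on a fully allocated node leaves no headroom.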
These options can be specified in a global node: section, and then overridden per cluster.
Here is a configuration file showing this in action:
---
node:
  # Overcommit CPU factor
  # 1 = no overcommit
  cpufactor: 2.5
  # Memory to reserve for the system on a node (in bytes)
  memoryminimum: 8589934592
clusters:
  - name: my-test-cluster
    host: 192.168.1.10
    user: pvecontrol@pve
    password: superpasssecret
    # Override global values for this cluster
    node:
      cpufactor: 1
  - name: prod-cluster-1
    host: 10.10.10.10
    user: pvecontrol@pve
    password: Supers3cUre
  - name: prod-cluster-2
    host: 10.10.10.10
    user: $(command to get -user)
    password: $(command to get -password)
  - name: prod-cluster-3
    host: 10.10.10.10
    user: morticia@pve
    token_name: pvecontrol
    token_value: 12345678-abcd-abcd-abcd-1234567890ab
Usage
Here is a quick overview of pvecontrol commands and options (it may evolve over time):
$ pvecontrol --help
Usage: pvecontrol [OPTIONS] COMMAND [ARGS]...

  Proxmox VE control CLI, version: x.y.z

Options:
  -d, --debug
  -o, --output [text|json|csv|yaml|md]  [default: text]
  -c, --cluster NAME        Proxmox cluster name as defined in configuration  [required]
  --unicode / --no-unicode  Use unicode characters for output
  --color / --no-color      Use colorized output
  --help                    Show this message and exit.

Commands:
  node evacuate  Evacuate a node by migrating all it's VM out to one or...
  node list      List nodes in the cluster
  sanitycheck    Check status of proxmox Cluster
  status         Show cluster status
  storage list   List storages in the cluster
  task get       Get detailled information about a task
  task list      List tasks in the cluster
  vm list        List VMs in the cluster
  vm migrate     Migrate VMs in the cluster

Made with love by Enix.io
pvecontrol works with subcommands for each operation. Operations related to a specific kind of object (tasks, for instance) are grouped into their own subcommand group. Each subcommand has its own help:
$ pvecontrol task get --help
Usage: pvecontrol task get [OPTIONS] UPID

Options:
  -f, --follow  Follow task log output
  -w, --wait    Wait task end
  --help        Show this message and exit.
The -c or --cluster flag is required in order to indicate on which cluster we want to work.
The simplest operation we can do to check that pvecontrol works correctly, and that authentication has been configured properly, is status:
$ pvecontrol --cluster my-test-cluster status
INFO:root:Proxmox cluster: my-test-cluster
Status: healthy
VMs: 0
Templates: 0
Metrics:
  CPU: 0.00/64(0.0%), allocated: 0
  Memory: 0.00 GiB/128.00 GiB(0.0%), allocated: 0.00 GiB
  Disk: 0.00 GiB/2.66 TiB(0.0%)
Nodes:
  Offline: 0
  Online: 3
  Unknown: 0
If this works, we're good to go!
Environment variables
pvecontrol supports the following environment variables:
- PVECONTROL_CLUSTER: the default cluster to use when no -c or --cluster option is specified.
- PVECONTROL_COLOR: if set to False, disables all colorized output.
- PVECONTROL_UNICODE: if set to False, disables all unicode output.
Shell completion
pvecontrol provides a completion helper to generate completion configuration for common shells. It currently supports bash, zsh, and fish.
You can adapt the following commands to your environment:
# bash
_PVECONTROL_COMPLETE=bash_source pvecontrol > "${BASH_COMPLETION_USER_DIR:-${XDG_DATA_HOME:-$HOME/.local/share}/bash-completion}/completions/pvecontrol"
# zsh
_PVECONTROL_COMPLETE=zsh_source pvecontrol > "${HOME}/.zsh/completions/_pvecontrol"
# fish
_PVECONTROL_COMPLETE=fish_source pvecontrol > "$HOME/.config/fish/completions/pvecontrol.fish"
Development
If you want to tinker with the code, all the required dependencies are listed in requirements.txt, and you can install them e.g. with pip:
pip3 install -r requirements.txt -e .
Then you can run the script directly like so:
pvecontrol -h
Contributing
This project uses semantic versioning with the python-semantic-release toolkit to automate the release process. All commits must follow the Angular Commit Message Conventions. The repository's main branch is also protected to prevent accidental releases; all updates must go through a PR with a review.
Made with :heart: by Enix (http://enix.io) :monkey: from Paris :fr:.