Migrate virtual machines between different Proxmox VE clusters

Migrate VMs between different Proxmox VE clusters.

Migrating a virtual machine (VM) from one node to another within the same cluster is built into the Proxmox Virtual Environment (PVE). Migrating a VM from one PVE cluster to another is not.

proxmove helps you move VMs between PVE clusters with minimal hassle.
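
proxmove is published on PyPI; installing it with pip is the typical route:

$ pip install proxmove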

usage: proxmove [-h] [-c FILENAME] [-n] [--bwlimit MBPS] [--skip-disks]
                [--skip-start] [--version]
                source destination nodeid storage vm [vm ...]

Migrate VMs from one Proxmox cluster to another.

positional arguments:
  source                alias of source cluster
  destination           alias of destination cluster
  nodeid                node on destination cluster
  storage               storage on destination node
  vm                    one or more VMs (guests) to move

optional arguments:
  -h, --help            show this help message and exit
  -c FILENAME, --config FILENAME
                        use alternate configuration inifile
  -n, --dry-run         stop before doing any writes
  --bwlimit MBPS        limit bandwidth in Mbit/s
  --skip-disks          do the move, but skip copying of the disks; implies
                        --skip-start
  --skip-start          do the move, but do not start the new instance
  --version             show program's version number and exit

Cluster aliases and storage locations should be defined in ~/.proxmoverc (or
see -c option). See the example proxmoverc.sample. It requires
[pve:CLUSTER_ALIAS] sections for the proxmox "api" URL and
[storage:CLUSTER_ALIAS:STORAGE_NAME] sections with "ssh", "path" and "temp"
settings.
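
For instance, a dry run that also caps the transfer rate (flags as in the help text above; the cluster, node, storage and VM names here are placeholders):

$ proxmove -n --bwlimit 200 old-cluster new-cluster node1 node1-ssd some-vm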

Example run

First you need to configure ~/.proxmoverc; see below.

When configured, you can do something like this:

$ proxmove banana-cluster the-new-cluster node2 node2-ssd the-vm-to-move
12:12:27: Attempt moving banana-cluster<e1400248> => the-new-cluster<6669ad2c> (node 'node2'): the-vm-to-move
12:12:27: - source VM the-vm-to-move@node1<qemu/565/running>
12:12:27: - storage 'ide2': None,media=cdrom (host=<unknown>, guest=<unknown>)
12:12:27: - storage 'virtio0': sharedsan:565/vm-565-disk-1.qcow2,format=qcow2,iops_rd=4000,iops_wr=500,size=50G (host=37.7GiB, guest=50.0GiB)
12:12:27: Creating new VM 'the-vm-to-move' on 'the-new-cluster', node 'node2'
12:12:27: - created new VM 'the-vm-to-move--CREATING' as UPID:node2:00005977:1F4D78F4:57C55C0B:qmcreate:126:user@pve:; waiting for it to show up
12:12:34: - created new VM 'the-vm-to-move--CREATING': the-vm-to-move--CREATING@node2<qemu/126/stopped>
12:12:34: Stopping VM the-vm-to-move@node1<qemu/565/running>
12:12:42: - stopped VM the-vm-to-move@node1<qemu/565/stopped>
12:12:42: Ejected (cdrom?) volume 'ide2' (none) added to the-vm-to-move--CREATING@node2<qemu/126/stopped>
12:12:42: Begin copy of 'virtio0' (sharedsan:565/vm-565-disk-1.qcow2,format=qcow2,iops_rd=4000,iops_wr=500,size=50G) to local-ssd
12:12:42: scp(1) copy from '/pool0/san/images/565/vm-565-disk-1.qcow2' (on sharedsan) to 'root@node2.the-new-cluster.com:/node2-ssd/temp/temp-proxmove/vm-126-virtio0'
Warning: Permanently added 'node2.the-new-cluster.com' (ECDSA) to the list of known hosts.
vm-565-disk-1.qcow2   100%   50GB   90.5MB/s   09:26
Connection to san.banana-cluster.com closed.
12:22:08: Temp data '/node2-ssd/temp/temp-proxmove/vm-126-virtio0' on local-ssd
12:22:08: Writing data from temp '/node2-ssd/temp/temp-proxmove/vm-126-virtio0' to '/dev/zvol/node2-ssd/vm-126-virtio0' (on local-ssd)
    (100.00/100%)
Connection to node2.the-new-cluster.com closed.
12:24:25: Removing temp '/node2-ssd/temp/temp-proxmove/vm-126-virtio0' (on local-ssd)
12:24:26: Starting VM the-vm-to-move@node2<qemu/126/stopped>
12:24:27: - started VM the-vm-to-move@node2<qemu/126/running>
12:24:27: Completed moving banana-cluster<e1400248> => the-new-cluster<6669ad2c> (node 'node2'): the-vm-to-move

Before, the-vm-to-move was running on banana-cluster on node1.

Afterwards, the-vm-to-move is running on the-new-cluster on node2. The original VM on the banana-cluster has been stopped and renamed to the-vm-to-move--MIGRATED.
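
On the source cluster you should then see the stopped, renamed guest. A rough sketch of what qm list on node1 might print (column layout abbreviated; VMID from the example run above):

$ ssh root@node1.banana-cluster.com qm list
  VMID NAME                       STATUS
   565 the-vm-to-move--MIGRATED   stopped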

Configuration

Set up the ~/.proxmoverc config file. First, define the clusters you have; for example, banana-cluster and the-new-cluster.

; Example cluster named "banana-cluster" with 3 storage devices, one
; shared, and two which exist on a single node only.
[pve:banana-cluster]
api=https://user@pve:PASSWORD@banana-cluster.com:443

; Example cluster named "the-new-cluster" with 2 storage devices; both
; storage devices exist on the respective nodes only.
[pve:the-new-cluster]
api=https://user@pve:PASSWORD@the-new-cluster.com:443

Next, configure the storage devices. They must be reachable over SSH, both from the machine running proxmove and from each other (using SSH agent forwarding).
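
Because the storage hosts also copy to each other directly, agent forwarding has to work along the whole chain. One quick sanity check, assuming the example hostnames used below:

$ ssh -A root@san.banana-cluster.com \
      ssh root@node2.the-new-cluster.com true
# should exit 0 without prompting for a password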

The following defines two storage devices for the banana-cluster, one shared and one local to node1 only.

On sharedsan, the images are probably called something like /pool0/san/images/VMID/vm-VMID-disk1.qcow2, while in Proxmox they are referred to as sharedsan:VMID/vm-VMID-disk1.qcow2.

[storage:banana-cluster:sharedsan] ; "sharedsan" is available on all nodes
ssh=root@san.banana-cluster.com
path=/pool0/san/images
temp=/pool0/san/private

[storage:banana-cluster:local@node1] ; local disk on node1 only
ssh=root@node1.banana-cluster.com
path=/srv/images
temp=/srv/temp
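
Given that mapping, the path= setting plus the Proxmox volume name yields the actual file; the disk from the example run above, for instance, should show up as:

$ ssh root@san.banana-cluster.com ls /pool0/san/images/565
vm-565-disk-1.qcow2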

If you use ZFS storage on the-new-cluster, the storage bits could look like this. Disk volumes exist on the ZFS pool node1-ssd and node2-ssd on the nodes node1 and node2 respectively.

Note that the temp= path is always a regular filesystem path, even when path= refers to a ZFS pool.

[storage:the-new-cluster:node1-ssd@node1]
ssh=root@node1.the-new-cluster.com
path=zfs:node1-ssd
temp=/node1-ssd/temp

[storage:the-new-cluster:node2-ssd@node2]
ssh=root@node2.the-new-cluster.com
path=zfs:node2-ssd
temp=/node2-ssd/temp
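
With path=zfs:POOL, the disks are ZFS zvols rather than regular files; the example run above wrote to /dev/zvol/node2-ssd/vm-126-virtio0. Standard ZFS tooling will list them, e.g.:

$ ssh root@node2.the-new-cluster.com zfs list -t volume -r node2-ssd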

The config file looks better with indentation. The author suggests this layout:

[pve:banana-cluster]
...

  [storage:banana-cluster:sharedsan]
  ...
  [storage:banana-cluster:local@node1]
  ...

[pve:the-new-cluster]
...

  [storage:the-new-cluster:node1-ssd@node1]
  ...

License

proxmove is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 3 or any later version.

Changes

  • v0.0.7 - 2016-10-07

    Bugs fixed:

    • Instead of trusting the “size=XXG” value, which may or may not be present in the storage volume config, proxmove now reads the QCOW header or the ZFS volume size directly. It also checks that the values are available before attempting a move.

  • v0.0.6 - 2016-09-21

    New features:

    • Add --bwlimit in Mbit/s to limit bandwidth during transfer. Uses the scp(1) -l option, or the mbuffer(1) auxiliary for ZFS. As an added bonus, mbuffer may improve ZFS send/recv speed through buffering. Closes #4.

    • Add --skip-disks option to skip copying of the disks. Use this if you want to copy the disks manually. Closes #3.

    • Add --skip-start option to skip autostarting of the VM.

    • Add optional pv(1) pipe viewer progress bar to get an ETA on ZFS transfers.

    • Add hidden --debug option for more verbosity.

    • Add hidden --ignore-exists option that allows you to test moves within the same cluster by creating an alias (second config).

    Bugs fixed:

    • Format is not always specified in the properties. If it isn’t, use the image filename suffix when available.

    • Sometimes old values aren’t available in the “pending” list. Don’t croak. Closes #2.

    • Began refactoring; testing bettercodehub.com.

    • Also check whether temporary (renamed) VMs exist before starting.

  • v0.0.5 - 2016-08-30

    • Added support for ZFS-to-ZFS disk copy. QCOW2-to-ZFS and ZFS-to-ZFS are now tested.

  • v0.0.4 - 2016-08-30

    • Major overhaul of configuration format and other changes.
