Migrating a simple server from VMware ESXi to Proxmox PVE


I just migrated a rather small server from VMware ESXi 7.0 to the Proxmox VE 8.1.3 hypervisor. The server hosted two virtual Windows 10 machines. I am documenting the steps here in case they are helpful to someone in the same situation.

Step 1: Install a new, empty drive

Insert a fresh SSD into the server and disconnect the old drive’s SATA cable. That way, you ensure that the original data is not touched at all while you set up and prepare the Proxmox system.

Step 2: Prepare Proxmox USB drive

Download the Proxmox ISO image from the official website and write it onto a USB stick. In my case, on macOS, I could easily use dd for that. First, use “diskutil list” to find the correct drive, and then execute the dd command (make sure to unmount any partitions on the drive beforehand):

sudo dd if=/path/to/proxmox.iso of=/dev/rdisk4 bs=8m
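
For reference, the whole sequence on macOS might look like this; the disk identifier (disk4) is just what it happened to be on my machine and will differ on yours:

diskutil list
diskutil unmountDisk /dev/disk4
sudo dd if=/path/to/proxmox.iso of=/dev/rdisk4 bs=8m
diskutil eject /dev/disk4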

Step 3: Install Proxmox

Insert the USB drive into the server and boot from it. The installation is quite straightforward, but in case you need more information, refer to the official Proxmox installation documentation.

Just one note on the storage settings: in case you want a larger “root” partition for Proxmox, specify that NOW to avoid headaches later on. In my case, I set the following parameters in the Advanced LVM Configuration Options, which give me the maximum possible root partition size. Note that Proxmox caps maxroot at a quarter of hdsize, regardless of whether you enter a larger value.

hdsize=894GB (full SSD size)
swapsize=8GB
maxroot=223GB
minfree=16GB
maxvz=(empty)

Now, boot Proxmox and access the web interface over the network. If you want, you can install the “Proxmox VE Post Install” script from tteck; you can find it on the tteck Proxmox helper scripts page, under Proxmox VE Tools -> Proxmox VE Post Install. Just open a shell and paste the corresponding command:

bash -c "$(wget -qLO - https://github.com/tteck/Proxmox/raw/main/misc/post-pve-install.sh)"

Step 4: Mount the original drive containing the VMware *.vmdk files

To mount the VMware drive (ESXi uses a file system called VMFS), we need the vmfs6-tools package. Open a shell, and enter this command:

apt install vmfs6-tools

Next, find out the device name of the disk by executing

fdisk -l
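
VMFS partitions show up in the fdisk output with the partition type “VMware VMFS”, so you can narrow the list down like this (the sample output line is illustrative):

fdisk -l 2>/dev/null | grep -i vmfs
# /dev/sdb1   2048  1953525134  1953523087  931.5G  VMware VMFS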

Once you have found the correct drive, create a directory where you want to mount it, like this:

mkdir /mnt/vmfsdisk

And finally, mount it like this (the mount will be read-only):

vmfs6-fuse /dev/sdXY /mnt/vmfsdisk

Hint (usually not needed): If your VMFS datastore spans multiple extents, you can specify them one after another before the mount point: vmfs6-fuse extent_1 extent_2 … mount_point
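
For example, a datastore consisting of two extents (the device names here are placeholders) would be mounted like this:

vmfs6-fuse /dev/sdb1 /dev/sdc1 /mnt/vmfsdisk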

If you now list the files on that drive, you should find the names of your virtual machines as directories:

ls -la /mnt/vmfsdisk
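
Inside each VM directory you will find the usual VMware files. Note that a VMDK normally consists of a small text descriptor file plus a large “-flat” file holding the actual data; the conversion in Step 6 expects the descriptor file. A listing might look roughly like this (the names are illustrative):

ls -la /mnt/vmfsdisk/yourvm/
# yourvm.vmx           VM configuration
# yourdisk.vmdk        small descriptor file (use this one in Step 6)
# yourdisk-flat.vmdk   the actual disk data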

Step 5: Create a new VM in Proxmox

Before you can convert the disk image data, you first need to create a new VM. The configuration of this VM should match that of the original VM on the VMware system. I am not sure if there is an easy way to automate this (e.g. a script which reads the vmx file and creates a new Proxmox VM), so I did it manually using the Proxmox web interface (if you prefer the shell, see the qm sketch after the list).

  1. Tab “General”: Give the VM a name and a free ID.
  2. Tab “OS”: Choose the correct guest OS version, and optionally mount the VirtIO drivers ISO in a CD-ROM drive.
  3. Tab “System”: Depending on whether the original VM used BIOS or UEFI, choose the correct settings here: for BIOS select the i440fx machine type, for UEFI select q35. For a UEFI VM, also select OVMF (UEFI) under BIOS (under Firmware) and specify a storage where the EFI disk should be created. Keep the SCSI controller at “VirtIO SCSI single”.
  4. Tab “Disks”: The proper disk configuration is important. Create as many disks as the original VM had. Also make sure that you specify the correct size. Proxmox will then create the correct volumes in the LVM storage layer.
    IMPORTANT: If you migrate a VM containing an operating system that needs additional SCSI drivers to be installed (which is the case for Windows guests), you have to configure the disks as SATA now. Otherwise, the VM won’t boot from the disk.
  5. Tab “CPU”: Select the same number of cores as the original VM had.
  6. Tab “Memory”: Enter the same amount of RAM the original VM had.
  7. Tab “Network”: Configure the network as desired.
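
If you prefer the command line over the web interface, the same configuration can be sketched with qm. All values below (VM ID 100, storage local-lvm, disk size, core count, RAM) are assumptions you must adapt to your original VM; this example uses UEFI (q35/OVMF) and a SATA disk as discussed above:

qm create 100 --name yourvm --ostype win10 \
  --machine q35 --bios ovmf --efidisk0 local-lvm:1 \
  --scsihw virtio-scsi-single \
  --sata0 local-lvm:60 \
  --cores 4 --memory 8192 \
  --net0 virtio,bridge=vmbr0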

Do not boot the VM yet.

Step 6: Migrate the data

The reason why we created the new VM before migrating the data is that we now have LVM logical volumes matching our previous volumes, so we can migrate the data with a single command per volume.

But first, we need to determine the correct name of the volume. For that, execute “fdisk -l” again. This will list all volumes, including the logical LVM volumes.
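
Alternatively, the LVM tools show the logical volumes directly. On a default installation the volume names follow the pattern vm-<ID>-disk-<N>; the exact names below are examples:

lvs                  # lists all logical volumes in the pve volume group
ls -la /dev/mapper/  # device-mapper names look like pve-vm--100--disk--1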

With this information, we can now run the qemu-img command, which converts the original VMDK file to RAW and writes it directly to the correct LVM volume:

qemu-img convert -p \
  -f vmdk /mnt/vmfsdisk/yourvm/yourdisk.vmdk \
  -O raw /dev/mapper/pve-vm--100--disk--1

Repeat this for every volume you want to migrate.
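
If you want to double-check, qemu-img info shows the virtual size of the source VMDK, which must not exceed the size of the target LVM volume created in Step 5 (the paths and the volume name continue the example from above):

qemu-img info /mnt/vmfsdisk/yourvm/yourdisk.vmdk
lvs pve/vm-100-disk-1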

One note about the target format of the conversion, from the Proxmox documentation:

Raw file format provides slightly better performance while qcow2 offers advanced features such as copy-on-write and live snapshots. Since V2.3, qcow2 is the default format.

Proxmox documentation about Windows 10 guest best practices
https://pve.proxmox.com/wiki/Windows_10_guest_best_practices#Further_information

Step 7: Boot the VM

Now comes the moment of truth. Start the VM and see if it boots!

You can now install the VirtIO drivers and uninstall the VMware Tools.
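
Once the VirtIO drivers are installed inside Windows, you can optionally move the disk from SATA to the faster VirtIO SCSI bus. The following qm sequence is one possible way to do this, again assuming VM ID 100 and the example volume name from above (verify both on your system first):

qm shutdown 100
qm set 100 --delete sata0                    # detach the disk; it shows up as "unused"
qm set 100 --scsi0 local-lvm:vm-100-disk-1   # reattach it on the VirtIO SCSI bus
qm set 100 --boot order=scsi0                # keep booting from this disk
qm start 100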
