But the change from OpenVZ to LXC brings some differences that make a migration less trivial (depending on how many OpenVZ features are in use).
The Proxmox Wiki has some good information on migrating from Proxmox 3 to Proxmox 4. I ran into a couple of issues during several migrations that I will share here; maybe they are of use to someone else. Keep in mind I only run Debian containers, so other distributions might have other issues.
When running a larger number of containers, it is quite possible you're going to run out of inotify instances. By default this limit is set to 128 (at least it was on my system). During the first migration I ran into problems after restoring a couple of containers, and the errors pointed to this setting. After increasing the value the problems went away.
If this helps, don't forget to make the change permanent in /etc/sysctl.conf.
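A minimal sketch of the fix, run as root on the Proxmox host (the new value of 512 is my own arbitrary choice; pick whatever fits your container count):

```shell
# Check the current inotify instance limit (128 was the default on my system)
sysctl fs.inotify.max_user_instances

# Raise it at runtime; 512 is an arbitrary value, pick what fits your host
sysctl -w fs.inotify.max_user_instances=512

# Persist the new value across reboots
echo "fs.inotify.max_user_instances = 512" >> /etc/sysctl.conf
```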
When running ping in an LXC container as a non-root user, I got the error:
As it turns out, no special capabilities were set on /bin/ping:
Normally ping has cap_net_raw+ep (the raw network capability, effective and permitted). After restoring this, everything works again.
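The check and the fix look like this, run as root inside the container (the path assumes Debian's /bin/ping):

```shell
# Show the capabilities currently set on ping; empty output means none
getcap /bin/ping

# Re-apply the raw-socket capability so non-root users can ping again
# (setcap is part of the libcap2-bin package)
setcap cap_net_raw+ep /bin/ping
```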
The problem is that the capabilities get lost when using vzdump and vzrestore to migrate the containers from OpenVZ to LXC. So when migrating, it can be very useful to generate a list of files with extra capabilities, so they can be restored later.
What I did on the in-place migrations was to create a dump of all capabilities for all containers before starting the migration. You'll need libcap2-bin installed on the Proxmox host.
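A sketch of such a dump, assuming the OpenVZ private areas live under /var/lib/vz/private/&lt;CTID&gt; and using a caps-&lt;CTID&gt;.txt naming scheme of my own (adjust both for your setup):

```shell
#!/bin/sh
# For every OpenVZ container, record which files carry extra capabilities.
for ct in /var/lib/vz/private/*; do
    [ -d "$ct" ] || continue
    ctid=$(basename "$ct")
    # getcap -r scans a tree recursively; output lines look like:
    #   /bin/ping = cap_net_raw+ep
    getcap -r "$ct" 2>/dev/null > "/var/lib/vz/caps-${ctid}.txt"
done
```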
This generates a file per container listing all files that require extra capabilities. The files are placed in /var/lib/vz.
After the migration to Proxmox 4 is finished and all containers are running, I run the following to restore the capabilities.
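A restore sketch, assuming the per-container dump files in /var/lib/vz are named caps-&lt;CTID&gt;.txt (my own naming) and contain getcap output lines such as `/bin/ping = cap_net_raw+ep`; the rootfs path is also an assumption, so adjust it for your storage layout:

```shell
#!/bin/sh
ctid=100                                # the container being restored
rootfs="/var/lib/lxc/${ctid}/rootfs"    # adjust for your storage layout
dump="/var/lib/vz/caps-${ctid}.txt"

[ -f "$dump" ] || exit 0                # nothing to do without a dump file

# Each dump line reads: <path> = <capabilities>
while read -r path sep caps; do
    [ -n "$caps" ] || continue
    setcap "$caps" "${rootfs}${path}"
done < "$dump"
```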
In some containers running Debian 8 (systemd) I ran into a problem where some processes didn't start. After some debugging it turned out to be an issue with systemd settings for that application. The problem seems to be with the following setting:
I found this to be in use with PowerDNS. Changing it to false solves the problem.
This has security implications, so make sure you're comfortable changing this.
Another systemd setting that can be an issue is:
Changing this to true seems to help.
This one came up when migrating an OpenVPN container. It needs access to /dev/net/tun to create its tunnels. Setting this up is easy, but different from OpenVZ.
You need to create a shell script that creates the required devices when the container starts, and the container configuration needs to refer to this script. I use the following code, of course replacing CTID with the correct container ID.
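A sketch of the hook script, which I keep at /var/lib/lxc/CTID/autodev.sh (the location is my own choice; the LXC_ROOTFS_MOUNT variable is set by LXC itself when it invokes the hook):

```shell
#!/bin/sh
# autodev hook: creates /dev/net/tun inside the container at start.
# LXC sets LXC_ROOTFS_MOUNT to the container's mounted rootfs before
# calling the hook; outside of LXC there is nothing to do.
[ -n "$LXC_ROOTFS_MOUNT" ] || exit 0

mkdir -p "${LXC_ROOTFS_MOUNT}/dev/net"
# /dev/net/tun is character device major 10, minor 200
mknod -m 0666 "${LXC_ROOTFS_MOUNT}/dev/net/tun" c 10 200
```

Make the script executable, then point the container configuration at it (on Proxmox 4 the configuration lives in /etc/pve/lxc/CTID.conf, which accepts raw lxc settings) by adding `lxc.autodev: 1` and `lxc.hook.autodev: /var/lib/lxc/CTID/autodev.sh`.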
After this change the container needs to be (re)started.