I’ve spent most of my day today trying a plethora of ways to configure Proxmox on a Hetzner server with multiple IP addresses. Most of the tutorials I found online gave a good deal of information but were lacking one or two crucial details. Shortly before I was ready to throw my computer out of the window, I had success and managed to get everything set up the way I wanted it.
This aims to be the definitive guide on how to accomplish that task. When finished, the setup includes the following features:
- Host bound to the main IPv4 address that comes with the server (and one of the 18,446,744,073,709,551,616 included IPv6 addresses)
- Every IPv4 address of a separately delegated subnet usable for virtual machines
- Internal private network for inter-VM communication and VMs that aren’t publicly accessible
I am describing here what WORKED FOR ME. I don’t know if each of the steps follows best practice or is even optimal in terms of security and/or performance. If you have any further information, or can spot a mistake I made, I urge you to reach out in the comment section or via Twitter and I’ll be happy to amend this article for the betterment of all of human knowledge.
Installing Proxmox isn’t very hard. Just make sure to do a clean Debian install on a reasonably beefy machine, matching your Debian version to the latest one recommended by Proxmox. After the initial setup is done and your base system is running, create the file ‘/etc/apt/sources.list.d/proxmox.list’ and paste in the following line to update your apt sources:
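The exact line depends on your Debian release; assuming Debian Jessie (which matches the pve-kernel-4.4 packages installed below), the pve-no-subscription repository would look like this — substitute your release codename if it differs:

```
# /etc/apt/sources.list.d/proxmox.list (assumes Debian Jessie / Proxmox VE 4.x)
deb http://download.proxmox.com/debian jessie pve-no-subscription
```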
Then add the repository key to your system with the following command:
wget -O - http://download.proxmox.com/debian/key.asc | apt-key add -
Next update and upgrade your system:
apt update && apt upgrade
Now it’s time to install the Proxmox kernel and headers. Don’t worry if the ones here are outdated when you read this, those packages will be automatically upgraded during the whole installation process.
apt install pve-firmware pve-kernel-4.4.8-1-pve pve-headers-4.4.8-1-pve
Now reboot your system to load the new kernel.
After the system is up and running again you can install Proxmox VE with the following command:
apt-get install proxmox-ve
After this step is done, reboot again.
Great, now you have Proxmox running and could in theory start creating VMs and containers like crazy. But the part that took me the longest to figure out is still to come. I wish I had had this blog post a few hours ago.
Configuring the network
To use all available IP addresses of your delegated IPv4 subnet on a Hetzner dedicated server you need to set up a bridge on your host computer.
Edit the file ‘/etc/network/interfaces’ and fill in the data you received from Hetzner. Ignore the first two definitions for loopback and IPv6 loopback and match your ‘eth0’ configuration to the following:
auto eth0
iface eth0 inet static
    address <YOUR MAIN IP>
    netmask 255.255.255.224
    gateway <YOUR GATEWAY>
    up route add -net <YOUR NET> netmask 255.255.255.224 gw <YOUR GATEWAY> eth0
Most of these settings should already be in there from Hetzner’s automatic installimage configuration during the initial operating system installation. Next is the host’s IPv6 configuration:
iface eth0 inet6 static
    address <ONE OF YOUR IPv6 ADDRESSES>
    netmask 128
    gateway fe80::1
Again, these lines should already exist in your network configuration, courtesy of some nice Hetzner engineer who pre-seeds all standard installs with automatic IP configurations. You should, however, change the netmask to 128, so that this interface claims only a single address instead of swallowing up all of the IPv6 addresses. The whole subnet will be assigned to the bridge below.
Now it’s time to create our first bridge that will connect our host to any virtual machines running on it and them to the outside world.
auto vmbr0
iface vmbr0 inet static
    address <YOUR MAIN IP>
    netmask 255.255.255.255
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0
    pre-up brctl addbr vmbr0
    up ip route add <FIRST IP FROM YOUR SUBNET>/32 dev vmbr0
    up ip route add <SECOND IP FROM YOUR SUBNET>/32 dev vmbr0
    ...
Add one ‘up ip route add …’ line per IPv4 address out of your delegated subnet. That way all of them will be available on the bridge and everyone can talk to everyone.
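As a concrete illustration, suppose Hetzner delegated the subnet 192.0.2.0/29 to you (a documentation range standing in for your real one); the route lines would then read:

```
up ip route add 192.0.2.1/32 dev vmbr0
up ip route add 192.0.2.2/32 dev vmbr0
up ip route add 192.0.2.3/32 dev vmbr0
...
```

A /29 gives you eight addresses in total, so you’d continue up to 192.0.2.6 (with .0 and .7 being the network and broadcast addresses).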
Also add the following to be able to route IPv6 addresses to and from your virtual machines:
iface vmbr0 inet6 static
    address <YOUR MAIN IPv6 ADDRESS>
    netmask 64
This takes care of all public IPv4 and IPv6 addresses at your disposal on the host level. Later we’ll examine how to properly configure your guests to match these settings. But first we will create another bridge on the host to…
Set up internal networking
It’s quite nice to outfit each virtual machine with two virtual network interfaces and have the second one connected to a private network that’s only reachable from your host and all virtual machines. That way you can, for instance, create database, caching or worker machines that need no public IP address and will never be seen on the open internet.
So still in ‘/etc/network/interfaces’ add the following at the end:
auto vmbr1
iface vmbr1 inet static
    address 10.20.30.1
    netmask 255.255.255.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    post-up iptables -t nat -A POSTROUTING -s '10.20.30.0/24' -o eth0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.20.30.0/24' -o eth0 -j MASQUERADE
This configures the internal communication. The last two lines take care of NAT (Network Address Translation), so your “private” VMs can connect to the internet, for instance to install software via apt and download updates. This way virtual machines can initiate connections to the outside, but can’t be directly reached from the internet, exactly like your home computer behind a router.
Since our host acts as a router, we have to make sure its kernel has IP packet forwarding activated. Take a look at ‘/etc/sysctl.conf’ and make sure that the following two lines aren’t commented out:
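In a stock Debian ‘/etc/sysctl.conf’ these two switches exist but are commented out; uncomment (or add) them:

```
net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1
```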
Lastly, make sure your host won’t send ICMP “redirect” messages to guests, telling them to find the gateway by themselves; this won’t work with our particular network setup. Add the following to ‘/etc/sysctl.conf’:
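A common way to do this is with the following keys (treat the exact key names as my assumption and adapt them to your interfaces):

```
net.ipv4.conf.all.send_redirects=0
net.ipv4.conf.default.send_redirects=0
```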
This concludes the host configuration. You can now reboot the server one last time, or activate the new settings by writing the sysctl settings directly and restarting the network stack. Now on to the last step.
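In case you’d rather skip the reboot, something along these lines should apply everything (run as root; ‘systemctl restart networking’ assumes a systemd-based Debian, otherwise use ‘service networking restart’):

```
sysctl -p /etc/sysctl.conf
systemctl restart networking
```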
Now this is different depending on the method you choose to create a VM. Proxmox offers LXC containers and fully virtualised machines. Depending on your needs both have their advantages and drawbacks, but discussing them is far outside the scope of these lines.
Setting up a container
First we’ll look at creating a container from within Proxmox’s web interface. Point your browser to ‘https://<YOUR MAIN IP>:8006’ (take care to actually type the https part, or your browser won’t know how to connect) and log in with your Linux user credentials. Before you can create your first container, you need to download a container template (essentially a system image).
- Expand the “Datacenter” node in the left sidebar menu
- Click on the “local” storage group
- Find the button named “Templates” in the content area and click it
- Choose your preferred template, “debian-8.0-standard” for instance.
- Click the “Download” button
Now that your template is available let’s move on to actually spinning up the container.
- Click on “Create CT” in the upper right corner menu
- Choose a good hostname
- Set up a root password
- On the “Template” tab choose your just downloaded template
- On “Root Disk” choose an appropriate disk size
- Under “CPU” select the desired number of CPUs
- Assign enough “Memory”
The next tab is the most interesting one: “Network”
- Make sure you’re configuring ‘eth0’
- Leave the MAC address field empty to get a new one randomly created by Proxmox
- Choose bridge “vmbr0”
- Leave “VLAN Tag” and “Rate limit” alone for now
- Choose “static” IPv4
- Insert one of your subnet’s IPv4 addresses into “IPv4/CIDR” and add the suffix “/32” behind it (e.g. 192.0.2.2/32)
- Type your host’s main IP address into the “Gateway (IPv4)” field; the one we set up waaaay back in our eth0 configuration in ‘/etc/network/interfaces’ on the host
- Choose “static” IPv6
- Insert one of your 18,446,744,073,709,551,616 IPv6 addresses, if you can find one you haven’t used yet
- As “Gateway (IPv6)” insert the IPv6 address you assigned to eth0 and vmbr0 on the host.
The next two tabs, “DNS” and “Confirm”, don’t have any interesting settings and you can leave them pretty much alone. Now you have a container that’s ready to run and can be accessed directly via its public IP over the internet.
Now if you want or need private communication, just add a second network interface to your container through the web GUI (click on your container in the left menu and choose Network -> Add) and give it an IP address in the configured subnet, 10.20.30.2 for example. For the gateway, type in your host’s private bridge IP, 10.20.30.1 in our example. Lastly, bind that interface to the private bridge ‘vmbr1’.
Setting up a virtual machine
This is a bit more involved and I’ll describe the way I did it. As mentioned before, there are probably better ways, so please don’t hold back if you have anything to add.
Since there are no downloadable templates for virtual machines, you have to reach for your preferred operating system’s install media and acquire it as an ISO disk image file. I chose Debian 8 minimal, which is downloadable via https://www.debian.org/CD/netinst
- Again select the “local” storage node in the left hand menu
- Click on “Upload” in the main content area
- Upload your ISO image
- Click on “Create VM” in the upper right corner
- Start with choosing a witty hostname
- On “OS” choose “Linux 4.X/3.X/2.6 Kernel” if you’re following along with a Debian guest system
- On “CD/DVD” select “Use CD/DVD disc image file (iso)” and choose your just uploaded ISO image
- “Hard Disk”, “CPU” and “Memory” should be pretty self explanatory. Choose whatever you deem necessary for your VM
- On the network tab make sure to choose “Bridged mode” and select “vmbr0”
- Under “Model” select “VirtIO” if your guest OS supports it
- Confirm your settings and click “Finish”
Now comes the tricky part. Start your newly created virtual machine and select “Console” in the upper menu. This will start a virtual console (Java browser plugin unfortunately required) which lets you start the installation. Debian’s minimal install is pretty easy to follow; the only catch is that we can’t configure the network during installation, because our gateway lies outside the configured subnet and the installer doesn’t provide for this setup. I found a very old discussion on Debian’s bug tracker about that topic, but the conclusion was pretty much not to alter the installer, because people who need this will find other ways.
So finish the installation without any network access and reboot the virtual machine. After logging in via the virtual console again, edit ‘/etc/network/interfaces’ again – this time on the guest system – and fill it with the following values:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address <ONE OF YOUR SUBNET'S IPs>
    netmask 255.255.255.255
    dns-nameservers <YOUR DNS SERVERS>
    post-up ip route add <YOUR MAIN IP> dev eth0
    post-up ip route add default via <YOUR MAIN IP> dev eth0
    pre-down ip route del default via <YOUR MAIN IP> dev eth0
    pre-down ip route del <YOUR MAIN IP> dev eth0

iface eth0 inet6 static
    address <ONE OF YOUR IPv6 ADDRESSES>
    netmask 64
    gateway <YOUR MAIN IPv6 ADDRESS>

auto eth1
iface eth1 inet static
    address 10.20.30.3
    netmask 255.255.255.0
    gateway 10.20.30.1
After you reboot your VM you should now be able to reach it from the internet as well as be able to communicate with your host and other VMs and containers via your private network. Now if you want a private network only VM or container, just remove eth0 from the web GUI or delete it from ‘/etc/network/interfaces’ and you should be left with a fully NATed machine.
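A quick way to sanity-check the result from inside a guest (substitute your real addresses) is:

```
ip route show          # the default route should point at <YOUR MAIN IP>
ping -c 3 10.20.30.1   # reaches the host over the private bridge
ping -c 3 debian.org   # reaches the internet, directly or via NAT
```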
This article has become much longer than I anticipated but that holds true for the whole setup of Proxmox with public IPs that I went through today as well, so I guess it’s only fitting.
If you followed this description you should now have a fully IPv4 and IPv6 capable VM cluster that’s neatly routed via your host and out to the internet in the case of the public bridge, or segregated via NAT in the case of the private bridge. I guess this setup is flexible enough to accommodate a broad range of virtual computing needs. Anything more involved would probably herald a dedicated routing VM using Vyatta/VyOS, pfSense or something similar. I decided that this would be overkill for my requirements and that I’d rather spend the time figuring out how to properly use Linux’s internals to set everything up.
Please feel free to chime in on how you solved those problems or if you spotted anything in my words that could be improved, optimised or amended.