Inspired by this tutorial: https://0x4a414e.space/2018/03/how-to-easily-reroute-ip-address-to-almost-any-server/
It is very helpful, except it does not cover correctly starting the gretap tunnel on boot or configuring VMs using cloud-init.
This guide is kind of hacky and doesn't seem to work 100% of the time... for now, https://wiki.buyvm.net/doku.php/gre_tunnel remains the way to go
This guide will refer to the server whose IPs need to be routed to the hypervisor as a "router". The hypervisor is the machine which will host the VMs using the router.
It was tested using a VPS from OVH with extra IPs and a dedicated server from ShockHosting.
In this guide, FAILOVER_IP is the failover IP that will be tunneled from the OVH VPS to the Proxmox hypervisor, ROUTER_IP is the public IP of the OVH VPS, and HYPERVISOR_IP is the public IP of the hypervisor. Substitute your own addresses throughout.
In my configuration, the router is running Debian 10, simply because it's a lot easier to do this with ifupdown rather than netplan. Make sure the net-tools package is installed and "net.ipv4.ip_forward=1" is set in sysctl.conf.
Append the following configuration to the /etc/network/interfaces file on the router (replace the IPs with the correct IP addresses of your machines, and "ens3" with the correct canonical name of your WAN interface):
Router VM (OVH)
auto gre1
iface gre1 inet static
    address 172.17.0.1/30
    # Establish the GRE tunnel (local = this router's public IP)
    pre-up ip link add gre1 type gretap local ROUTER_IP remote HYPERVISOR_IP ttl 255
    post-down ip link del gre1
    # Route the failover IP through the tunnel
    post-up ip route add FAILOVER_IP/32 dev gre1
    # Publish an ARP entry for the failover IP on the WAN interface
    pre-up arp -s FAILOVER_IP $(cat /sys/class/net/ens3/address) -i ens3 pub
This creates a new interface called "gre1" running a GRETAP tunnel, and routes the failover IP through it.
Reboot the router server and check "ip link". You should see an interface called "gre1" with an MTU of 1462.
In my configuration, the hypervisor is running Proxmox 6.2-3 on Debian 10. I chose to use a hacky rc.local instead of /etc/network/interfaces because there seems to be a race condition (or something similar) that leads to the GRE tunnel not being established about half the time.
First, create a new bridge by appending the following configuration to your /etc/network/interfaces:
iface vmbr2 inet static
Then, create a systemd service that runs the /etc/rc.local script on every boot:
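A sketch of the standard rc-local compatibility unit follows (place it at /etc/systemd/system/rc-local.service; note that Debian 10's systemd may already ship a similar unit):

```ini
[Unit]
Description=/etc/rc.local Compatibility
ConditionPathExists=/etc/rc.local

[Service]
Type=forking
ExecStart=/etc/rc.local start
TimeoutSec=0
StandardOutput=tty
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```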
Make sure to systemctl daemon-reload & systemctl enable the rc-local service.
Then place the following script into /etc/rc.local, again replacing the IPs with your machines' IPs, and make it executable:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.

# Note: HYPERVISOR_IP is the external IP of your hypervisor,
# ROUTER_IP is the external IP of your OVH machine
ip link add gre1 type gretap local HYPERVISOR_IP remote ROUTER_IP ttl 255
ip link set gre1 up
ip addr add 172.17.0.2/30 dev gre1
# Bring up the bridge now that gre1 exists
ifup vmbr2

exit 0
This creates the GRETAP tunnel on the hypervisor and forces the newly-created Linux bridge to be brought up only AFTER the tunnel is created (bringing it up earlier would fail, because the gre1 interface wouldn't exist yet).
Yep, it's really hacky.
Finally, reboot the hypervisor and verify that a new interface called gre1 exists in the output of ip link.
After both machines are rebooted, make sure the router can ping 172.17.0.2, and the hypervisor can ping 172.17.0.1. If it does not work, double check your configs and that the gre1 interface exists and is up on both machines.
I prefer to use cloud-init to configure my VMs, and this guide will only provide a cloud-init configuration. Do note that cloud-init version 2 network configs use the same format as netplan, so they can be dropped into netplan directly and will work.
In the Proxmox datacenter view, add the "Snippets" content type to one of your storages (or create a new storage for snippets). I used the default "local" storage to hold snippets. This creates a directory called path/to/storage/snippets (or, on the default local storage, /var/lib/vz/snippets). In that directory, create a new file ending with .yml (.yaml will not work!) and add the following content, again replacing the placeholder IPs with your relevant addresses.
Cloudinit Network.yml example
- to: 0.0.0.0/0
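As a sketch, a complete version-2 network config built around that route could look like the following (the interface name, failover address, gateway, and DNS servers here are assumptions to replace with your own values):

```yaml
version: 2
ethernets:
  eth0:
    mtu: 1462                  # must match the GRE tunnel MTU
    addresses:
      - FAILOVER_IP/32
    routes:
      - to: 0.0.0.0/0
        via: 172.17.0.1        # assumed gateway: the router's gre1 address
        on-link: true          # allows a gateway outside the VM's own subnet
    nameservers:
      addresses: [1.1.1.1, 8.8.8.8]
```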
The default route entry is the important part: it makes sure a default route is created in the VM, which cannot be achieved with the normal autogenerated cloud-init configuration. (This is not needed for Fedora.)
It's also important to set the MTU to 1462 (the same as the GRE tunnels), otherwise connections will be extremely slow due to packet fragmentation.
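The 1462 figure comes from the GRETAP encapsulation overhead: each tunneled frame carries an extra outer IPv4 header (20 bytes), a base GRE header (4 bytes), and an inner Ethernet header (14 bytes) on top of the payload, so:

```shell
# 1500-byte WAN MTU minus GRETAP overhead (outer IPv4 + GRE + inner Ethernet)
echo $((1500 - 20 - 4 - 14))   # prints 1462
```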
Set your VM to use the custom cloudinit network config by executing qm set <vmid> --cicustom "network=your-storage:snippets/your-file.yml"
Now, boot your VM and it should be able to access the internet. Hooray, it works!