Using GRE-TAP tunnel to route IPs from OVH VPS to a Proxmox hypervisor
Inspired by this tutorial: https://0x4a414e.space/2018/03/how-to-easily-reroute-ip-address-to-almost-any-server/
It is very helpful, but it does not cover reliably starting the GRETAP tunnel on boot or configuring VMs with cloud-init.
This guide is somewhat hacky and does not seem to work 100% of the time... for now, https://wiki.buyvm.net/doku.php/gre_tunnel remains the way to go.
This guide will refer to the server whose IPs need to be routed to the hypervisor as the "router". The hypervisor is the machine that will host the VMs whose traffic goes through the router.
It was tested using a VPS from OVH with extra IPs and a dedicated server from ShockHosting.
In this case, 18.104.22.168 is the IP that will be tunneled from the OVH VPS to the Proxmox hypervisor.
In my configuration, the router is running Debian 10, simply because it's a lot easier to do this with ifupdown rather than netplan. Make sure the net-tools package is installed and "net.ipv4.ip_forward=1" is set in sysctl.conf.
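The prerequisites above boil down to a couple of commands on the router (run as root; this assumes forwarding is set in /etc/sysctl.conf rather than a drop-in under /etc/sysctl.d/):

```shell
# Install net-tools (provides the arp command used below)
apt-get install -y net-tools
# Enable IPv4 forwarding now and on every boot
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
sysctl -p
# Verify: should print "net.ipv4.ip_forward = 1"
sysctl net.ipv4.ip_forward
```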
Append the following configuration to the /etc/network/interfaces file on the router (replace the IPs with the correct IP addresses of your machines, and "ens3" with the correct canonical name of your WAN interface):
Router VM (OVH)
...
auto gre1
iface gre1 inet static
    address 172.17.0.1
    netmask 255.255.255.252
    # GRE tunnel establish
    pre-up ip link add gre1 type gretap local 22.214.171.124 remote 126.96.36.199 ttl 255
    post-down ip link del gre1
    # Add IPs to tunnel
    post-up ip ro a 188.8.131.52/32 dev gre1
    pre-up arp -s 184.108.40.206 $(cat /sys/class/net/ens3/address) -i ens3 pub
...
This creates a new interface called "gre1" running a GRETAP tunnel, and routes the failover IPs through it. The "arp -s ... pub" line publishes a proxy-ARP entry for the failover IP on the WAN interface, so the upstream gateway resolves it to the router's MAC address.
Reboot the router server and check "ip link". You should see an interface called "gre1" with an MTU of 1462.
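The check can be done like this (the exact flags shown in the output will vary with the interface state):

```shell
# Confirm the tunnel interface exists and note its MTU
ip link show gre1
# Expect a line similar to:
# gre1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1462 qdisc ...
```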
In my configuration, the hypervisor is running Proxmox 6.2-3 on Debian 10. I chose to use a hacky rc.local instead of /etc/network/interfaces because there seems to be a race condition (or something similar) that leads to the GRE tunnel not being established about half the time.
First, create a new bridge by appending the following configuration to your /etc/network/interfaces:
...
auto vmbr2
iface vmbr2 inet static
    bridge-ports gre1
    bridge-stp off
    bridge-fd 0
...
Then, create a systemd service that runs the /etc/rc.local script on every boot:
[Unit]
Description=/etc/rc.local Compatibility
ConditionPathExists=/etc/rc.local

[Service]
Type=forking
ExecStart=/etc/rc.local start
TimeoutSec=0
StandardOutput=tty
RemainAfterExit=yes
SysVStartPriority=99

[Install]
WantedBy=multi-user.target
Make sure to run systemctl daemon-reload and systemctl enable on the rc-local service.
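Concretely, assuming the unit file was saved as /etc/systemd/system/rc-local.service:

```shell
# Register the new unit and start it on every boot
systemctl daemon-reload
systemctl enable rc-local.service
```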
Then place the following script into /etc/rc.local, again replacing the IPs with your machines' IPs, and make it executable:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.

# Note: 220.127.116.11 is the external IP of your hypervisor,
# 18.104.22.168 is the external IP of your OVH machine
ip link add gre1 type gretap local 22.214.171.124 remote 126.96.36.199 ttl 255
ip link set gre1 up
ip a a 172.17.0.2/30 dev gre1
ifup vmbr2

exit 0
This creates the GRETAP tunnel on the hypervisor and forces the newly created Linux bridge to be brought up AFTER the tunnel exists (otherwise ifup would fail, because the gre1 interface wouldn't exist yet).
Yep, it's really hacky.
Finally, reboot the hypervisor and verify that a new interface called gre1 exists in the output of ip link.
After both machines are rebooted, make sure the router can ping 172.17.0.2, and the hypervisor can ping 172.17.0.1. If it does not work, double check your configs and that the gre1 interface exists and is up on both machines.
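The connectivity check, spelled out:

```shell
# On the router:
ping -c 3 172.17.0.2
# On the hypervisor:
ping -c 3 172.17.0.1

# If either ping fails, confirm the tunnel end exists and is up:
ip link show gre1
ip addr show gre1
```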
I prefer to use cloud-init to configure my VMs, and this guide will only provide a cloud-init configuration for VMs. Do note that cloud-init version 2 network configs use the same format as netplan, so they can be dropped into netplan directly and will work.
In the Proxmox datacenter view, add the "Snippets" content type to one of your storages (or create a new storage for snippets). I used the default "local" storage to hold snippets. This creates a directory called path/to/storage/snippets (on the default local storage, /var/lib/vz/snippets). In that directory, create a new file ending in .yml (.yaml will not work!) and add the following content, again replacing the placeholder IPs with your relevant addresses.
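The same storage change can be made from the CLI with pvesm, Proxmox's storage manager. The content list below is an assumption — keep whatever content types your storage already serves and just add snippets:

```shell
# Enable snippets on the default "local" storage (adjust the content list to yours)
pvesm set local --content iso,vztmpl,backup,snippets
mkdir -p /var/lib/vz/snippets
```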
Cloudinit Network.yml example
version: 2
ethernets:
  eth0:
    addresses:
      - 188.8.131.52/32
    gateway4: 172.17.0.1
    match:
      macaddress: "0E:81:4A:F8:CA:E7"
    mtu: 1462
    nameservers:
      addresses:
        - 184.108.40.206
        - 220.127.116.11
    set-name: eth0
    routes:
      - to: 0.0.0.0/0
        via: 172.17.0.1
        on-link: true
The routes section at the end is the important part: it makes sure an on-link default route is created in the VM, which cannot be achieved with the normal cloud-init autogenerated configuration. This is not needed for Fedora.
It's also important to set the MTU to 1462 (the same as the GRE tunnel), otherwise connections will be extremely slow due to packet fragmentation.
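Where 1462 comes from: GRETAP over IPv4 adds a 20-byte outer IPv4 header, a 4-byte GRE header, and a 14-byte inner Ethernet header to every packet, so 38 bytes of the 1500-byte physical MTU are eaten by encapsulation. A quick sanity check:

```shell
# GRETAP encapsulation overhead on a 1500-byte link
outer_ip=20; gre=4; inner_eth=14
echo $((1500 - outer_ip - gre - inner_eth))   # prints 1462
```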
Set your VM to use the custom cloudinit network config by executing qm set <vmid> --cicustom "network=your-storage:snippets/your-file.yml"
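To confirm Proxmox picked the snippet up, you can render the network config it will serve to the VM (replace <vmid> with your VM's ID):

```shell
# Dump the generated cloud-init network configuration for inspection
qm cloudinit dump <vmid> network
```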
Now, boot your VM and it should be able to access the internet. Hooray, it works!