Using GRE-TAP tunnel to route IPs from OVH VPS to a Proxmox hypervisor

Inspired by this tutorial: https://0x4a414e.space/2018/03/how-to-easily-reroute-ip-address-to-almost-any-server/

It is very helpful, except it does not cover how to correctly start the gretap tunnel on boot or how to configure VMs using cloud-init.

This guide is kind of hacky and doesn't seem to work 100% of the time... for now, https://wiki.buyvm.net/doku.php/gre_tunnel remains the way to go

Intro

This guide refers to the server whose IPs need to be routed to the hypervisor as the "router". The hypervisor is the machine that will host the VMs which use the routed IPs.

It was tested using a VPS from OVH with extra IPs and a dedicated server from ShockHosting.

In this case, 1.2.3.4 is the IP that will be tunneled from the OVH VPS to the Proxmox hypervisor.


Router Configuration

In my configuration, the router is running Debian 10, simply because it's a lot easier to do this with ifupdown rather than netplan. Make sure the net-tools package is installed and "net.ipv4.ip_forward=1" is set in sysctl.conf.
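
For reference, the prerequisites can be taken care of roughly like this on a stock Debian 10 install (a sketch; adjust to taste):

CODE
apt install net-tools
# Enable IPv4 forwarding persistently, then apply it immediately
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
sysctl -p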

Append the following configuration to the /etc/network/interfaces file on the router (replace the IPs with the correct IP addresses of your machines, and "ens3" with the correct canonical name of your WAN interface):

Router VM (OVH)

CODE
...
auto gre1
iface gre1 inet static
        address 172.17.0.1
        netmask 255.255.255.252

        # Establish the GRE tunnel
        # (2.3.4.5 = external IP of this router/OVH VPS, 3.4.5.6 = external IP of the hypervisor)
        pre-up ip link add gre1 type gretap local 2.3.4.5 remote 3.4.5.6 ttl 255
        post-down ip link del gre1
        # Route the failover IP through the tunnel
        post-up ip ro a 1.2.3.4/32 dev gre1
        # Publish an ARP entry for the failover IP on the WAN interface
        pre-up arp -s 1.2.3.4 $(cat /sys/class/net/ens3/address) -i ens3 pub
...

This creates a new interface called "gre1" running a GRETAP tunnel. The failover IP is routed through the gre1 interface, and a published ARP entry on the WAN interface makes the upstream network deliver traffic for that IP to the router.
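
If you have more than one failover IP, the same pattern simply repeats: one post-up route and one pre-up ARP entry per address. For example, a second (hypothetical) failover IP 1.2.3.5 would add the following to the gre1 stanza:

CODE
        post-up ip ro a 1.2.3.5/32 dev gre1
        pre-up arp -s 1.2.3.5 $(cat /sys/class/net/ens3/address) -i ens3 pub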

Reboot the router server and check "ip link". You should see an interface called "gre1" with an MTU of 1462.
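
If you want to sanity-check the tunnel without a full reboot, bringing the interface up by hand should work too:

CODE
ifup gre1
# Look for "gretap remote ... local ..." and "mtu 1462" in the output
ip -d link show gre1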


Hypervisor Configuration

In my configuration, the hypervisor is running Proxmox 6.2-3 on Debian 10. I chose to use a hacky rc.local instead of /etc/network/interfaces because there seems to be a race condition (or something similar) that leads to the GRE tunnel not being established about half the time.

First, create a new bridge by appending the following configuration to your /etc/network/interfaces:

PVE Hypervisor

CODE
...
auto vmbr2
iface vmbr2 inet manual
        bridge-ports gre1
        bridge-stp off
        bridge-fd 0
...


Then, create a systemd service (e.g. /etc/systemd/system/rc-local.service) that runs the /etc/rc.local script on every boot:

rc-local.service

TEXT
[Unit]
Description=/etc/rc.local Compatibility
ConditionPathExists=/etc/rc.local

[Service]
Type=forking
ExecStart=/etc/rc.local start
TimeoutSec=0
StandardOutput=tty
RemainAfterExit=yes
SysVStartPriority=99

[Install]
WantedBy=multi-user.target

After creating the unit, reload systemd and enable the rc-local service:
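
CODE
systemctl daemon-reload
systemctl enable rc-local.service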


Then place the following script into /etc/rc.local, again replacing the IPs with your machines' IPs, and make it executable:

rc.local

CODE
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.

# Note: 3.4.5.6 is the external IP of your hypervisor, 2.3.4.5 is the external IP of your OVH machine
# Create the GRETAP tunnel towards the router and bring it up
ip link add gre1 type gretap local 3.4.5.6 remote 2.3.4.5 ttl 255
ip link set gre1 up
# Assign the hypervisor's end of the point-to-point tunnel subnet
ip a a 172.17.0.2/30 dev gre1
# Bring up the bridge that uses gre1 as a port (must happen after the tunnel exists)
ifup vmbr2
exit 0

This creates a GRETAP tunnel on the hypervisor and also forces the newly-created Linux bridge to be started AFTER the tunnel is created (otherwise it will fail, because the gre1 interface won't exist).

Yep, it's really hacky.


Finally, reboot the hypervisor and verify that a new interface called gre1 exists in the output of ip link.
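
You can also confirm that gre1 actually ended up as a port of vmbr2 (exact output will vary):

CODE
ip -d link show gre1
# gre1 should show up with "master vmbr2"
bridge link show | grep gre1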

Testing

After both machines are rebooted, make sure the router can ping 172.17.0.2, and the hypervisor can ping 172.17.0.1. If it does not work, double check your configs and that the gre1 interface exists and is up on both machines.
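
For example:

CODE
# On the router
ping -c 3 172.17.0.2
# On the hypervisor
ping -c 3 172.17.0.1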


VM Configuration

I prefer to use cloud-init to configure my VMs, and this guide will only provide a cloud-init configuration. Note that cloud-init version 2 network configs use the netplan format, so the config below can also be dropped directly into netplan and it will work.


In the Proxmox datacenter view, add the "Snippets" content type to one of your storages (or create a new storage for snippets). I used the default "local" storage to hold snippets. This creates a directory called path/to/storage/snippets (on the default local storage, /var/lib/vz/snippets). In that directory, create a new file ending in .yml (.yaml will not work!) and add the content shown below, again replacing the placeholder IPs with your relevant addresses.
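
If you prefer the CLI to the datacenter GUI, enabling snippets on the default "local" storage looks roughly like this (a sketch; keep whatever content types the storage already serves and append snippets):

CODE
pvesm set local --content iso,vztmpl,backup,snippets
# Snippets directory for the default local storage
mkdir -p /var/lib/vz/snippets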

Cloudinit Network.yml example

YML
    version: 2
    ethernets:
        eth0:
            addresses:
            - 1.2.3.4/32
            gateway4: 172.17.0.1
            match:
                macaddress: "0E:81:4A:F8:CA:E7"
            mtu: 1462
            nameservers:
                addresses:
                - 1.1.1.1
                - 1.0.0.1
            set-name: eth0
            routes:
            - to: 0.0.0.0/0
              via: 172.17.0.1
              on-link: true

The routes block at the end is the important part: it makes sure a default route is created in the VM, which cannot be achieved with the normal auto-generated cloud-init configuration. This is not needed for Fedora.

It's also important to set the MTU to 1462 (the same as the GRE tunnel), otherwise connections will be extremely slow due to packet fragmentation.
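
A quick way to verify the MTU end to end is a do-not-fragment ping sized for a 1462-byte MTU: 1462 minus 20 bytes of IP header and 8 bytes of ICMP header leaves 1434 bytes of payload. Assuming no extra encapsulation on the path, the first command below should succeed and the second should fail:

CODE
# From inside the VM
ping -M do -s 1434 -c 3 1.1.1.1
ping -M do -s 1435 -c 3 1.1.1.1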


Set your VM to use the custom cloudinit network config by executing qm set <vmid> --cicustom "network=your-storage:snippets/your-file.yml"
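
For example, assuming (hypothetically) VM ID 100 and a snippet named network.yml on the default local storage:

CODE
qm set 100 --cicustom "network=local:snippets/network.yml"
# Optionally inspect the network config cloud-init will receive
qm cloudinit dump 100 network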


Now, boot your VM and it should be able to access the internet. Hooray, it works!
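
A few quick checks from inside the VM (exact output depends on the distribution):

CODE
ip addr show eth0       # should show 1.2.3.4/32 and mtu 1462
ip route show default   # should show: default via 172.17.0.1 dev eth0 ... onlink
ping -c 3 1.1.1.1       # basic connectivity test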

