
Using GRE-TAP tunnel to route IPs from OVH VPS to a Proxmox hypervisor

Inspired by this tutorial:

It is very helpful, except it does not cover correctly starting the gretap tunnel on boot + configuring VMs using cloudinit.

This guide is admittedly hacky and doesn't seem to work 100% of the time, but for now it remains the way to go.


This guide will refer to the server whose IPs need to be routed to the hypervisor as a "router". The hypervisor is the machine which will host the VMs using the router.

It was tested using a VPS from OVH with extra IPs and a dedicated server from ShockHosting.

In this case, a failover IP is the address that will be tunneled from the OVH VPS to the Proxmox hypervisor.

Router Configuration

In my configuration, the router is running Debian 10, simply because it's a lot easier to do this with ifupdown rather than netplan. Make sure the net-tools package is installed and "net.ipv4.ip_forward=1" is set in sysctl.conf.
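The prerequisites above can be handled with a few commands; this is a sketch assuming a standard Debian 10 setup with a writable /etc/sysctl.conf:

```shell
# Install net-tools (provides the arp command used in the tunnel config)
apt-get install -y net-tools

# Enable IPv4 forwarding immediately and persist it across reboots
sysctl -w net.ipv4.ip_forward=1
grep -q '^net.ipv4.ip_forward=1' /etc/sysctl.conf \
    || echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
```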

Append the following configuration to the /etc/network/interfaces file on the router (replace the IPs with the correct IP addresses of your machines, and "ens3" with the correct canonical name of your WAN interface):

Router VM (OVH)

auto gre1
iface gre1 inet static

        # Establish the GRE-TAP tunnel
        # (ROUTER_IP and HYPERVISOR_IP are placeholders for the public
        # IPs of the router and the hypervisor)
        pre-up ip link add gre1 type gretap local ROUTER_IP remote HYPERVISOR_IP ttl 255
        post-down ip link del gre1
        # Route the failover IP through the tunnel
        post-up ip route add FAILOVER_IP/32 dev gre1
        # Publish this machine's MAC for the failover IP on the WAN interface
        pre-up arp -s FAILOVER_IP $(cat /sys/class/net/ens3/address) -i ens3 pub

This creates a new interface called "gre1" running a GRE-TAP tunnel, and routes the failover IPs through the gre1 interface.

Reboot the router server and check "ip link". You should see an interface called "gre1" with an MTU of 1462.
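If you'd rather check without a full reboot, the interface can be brought up manually and inspected with plain iproute2 commands:

```shell
ifup gre1                # bring the tunnel up from the config above
ip link show gre1        # should list gre1 with "mtu 1462"
ip route show dev gre1   # should show the failover IP routed via gre1
```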

Hypervisor Configuration

In my configuration, the hypervisor is running Proxmox 6.2-3 on Debian 10. I chose to use a hacky rc.local instead of /etc/network/interfaces because there seems to be a race condition (or something similar) that leads to the GRE tunnel not being established about half the time.

First, create a new bridge by appending the following configuration to your /etc/network/interfaces:

PVE Hypervisor

auto vmbr2
iface vmbr2 inet static
        bridge-ports gre1
        bridge-stp off
        bridge-fd 0

Then, create a systemd service that runs the /etc/rc.local script on every boot:


[Unit]
Description=/etc/rc.local Compatibility
ConditionPathExists=/etc/rc.local

[Service]
Type=forking
ExecStart=/etc/rc.local start
TimeoutSec=0
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target


Make sure to run systemctl daemon-reload and systemctl enable the rc-local service.
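Assuming the unit file was saved as /etc/systemd/system/rc-local.service, that step looks like this:

```shell
chmod +x /etc/rc.local
systemctl daemon-reload
systemctl enable rc-local
```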

Then place the following script into /etc/rc.local, again replacing the IPs with your machines' IPs, and make it executable:


#!/bin/sh -e
# rc.local
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
# In order to enable or disable this script just change the execution
# bits.
# By default this script does nothing.

# Note: HYPERVISOR_IP is the external IP of your hypervisor, ROUTER_IP is
# the external IP of your OVH machine, and GRE1_IP is the address to assign
# to the tunnel interface (all placeholders - substitute your own)
ip link add gre1 type gretap local HYPERVISOR_IP remote ROUTER_IP ttl 255
ip link set gre1 up
ip a a GRE1_IP dev gre1
ifup vmbr2
exit 0

This creates a GRETAP tunnel on the hypervisor and also forces the newly-created Linux bridge to be started AFTER the tunnel is created (otherwise it will fail, because the gre1 interface won't exist).

Yep, it's really hacky.

Finally, reboot the hypervisor and verify that a new interface called gre1 exists in the output of ip link.


After both machines are rebooted, make sure the router can ping the hypervisor's tunnel address, and the hypervisor can ping the router's. If it does not work, double-check your configs and verify that the gre1 interface exists and is up on both machines.
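A quick sanity check, with GRE1_ROUTER_IP and GRE1_HV_IP standing in for the tunnel-side addresses of the two machines:

```shell
# On the router:
ping -c 3 GRE1_HV_IP

# On the hypervisor:
ping -c 3 GRE1_ROUTER_IP

# If either fails, confirm the tunnel exists and is up on both ends:
ip link show gre1
```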

VM Configuration

I prefer to use cloud-init to configure my VMs, so this guide only provides a cloud-init configuration. Do note that cloud-init version 2 network configs use netplan syntax, so the same config can be added directly to netplan and it will work.

In the Proxmox datacenter view, enable the "Snippets" content type on one of your storages (or create a new storage for snippets). I used the default "local" storage to hold snippets. This will create a directory called path/to/storage/snippets (or on the default local storage, /var/lib/vz/snippets). In that directory, create a new file ending with .yml (.yaml will not work!) and add the following content, again replacing the placeholder IPs with your relevant addresses.
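If you prefer the CLI over the datacenter view, the same can be done with pvesm; adjust the content list to match what your storage already serves, since pvesm set replaces it wholesale:

```shell
# Enable snippets (alongside the existing content types) on "local"
pvesm set local --content images,iso,vztmpl,backup,snippets
mkdir -p /var/lib/vz/snippets
```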

Cloudinit Network.yml example

    version: 2
    ethernets:
        eth0:
            match:
                macaddress: "0E:81:4A:F8:CA:E7"
            mtu: 1462
            set-name: eth0
            addresses:
                - FAILOVER_IP/32
            routes:
                - to: 0.0.0.0/0
                  via: ROUTER_IP
                  on-link: true

The route with on-link: true at the end is the important part: it makes sure that a default route is created in the VM, which cannot be achieved using the normal cloud-init autogenerated configuration. This is not needed for Fedora.

It's also important to set the MTU to 1462 (the same as the GRE tunnel), otherwise connections will be extremely slow due to packet fragmentation.
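The 1462 figure is not arbitrary: GRE-TAP wraps each frame in an outer IPv4 header (20 bytes), a GRE header (4 bytes), and the tunneled inner Ethernet header (14 bytes), all of which must fit inside the physical 1500-byte MTU:

```shell
# GRE-TAP overhead on a 1500-byte WAN MTU:
#   20 (outer IPv4) + 4 (GRE) + 14 (inner Ethernet) = 38 bytes
echo $((1500 - 20 - 4 - 14))   # prints 1462
```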

Set your VM to use the custom cloudinit network config by executing qm set <vmid> --cicustom "network=your-storage:snippets/your-file.yml"

Now, boot your VM and it should be able to access the internet. Hooray, it works!
