Bitwarden_rs over WireGuard VPN

Hello everyone,

I need some help to make bitwarden_rs respond to connection requests coming through a WireGuard VPN network. I am building a mini hybrid/home cloud. I have rented an Ubuntu VM with a public IP address, which is used as the WireGuard server. I then spun up a couple of Ubuntu VMs on my notebook (node1 and node2), which are set up as WireGuard client nodes behind NAT. In reality these VMs are on the same network and can ping and reach each other both with and without the WireGuard VPN.
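For reference, the wg-quick configs look roughly like this (the addresses, keys, port, and interface name below are placeholders for the sake of the example, not my exact values):

```
# /etc/wireguard/wg0.conf on the rented server (public IP, acts as the hub)
[Interface]
Address    = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]  # node1
PublicKey  = <node1-public-key>
AllowedIPs = 10.8.0.2/32

[Peer]  # node2
PublicKey  = <node2-public-key>
AllowedIPs = 10.8.0.3/32

# /etc/wireguard/wg0.conf on node1 (node2 is the same with 10.8.0.3)
[Interface]
Address    = 10.8.0.2/24
PrivateKey = <node1-private-key>

[Peer]  # rented server
PublicKey           = <server-public-key>
Endpoint            = <server-public-ip>:51820
AllowedIPs          = 10.8.0.0/24
PersistentKeepalive = 25
```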

I ran bitwarden_rs on node1. I tried doing that with the Docker image, by extracting the binaries from a running container, and by building bitwarden_rs locally (bitwarden_rs 1.20.0-a82c0491) and installing it as a systemd service. Whatever I tried, I always hit the same blocking issue: when an HTTP request is sent through the WireGuard VPN for bitwarden_rs to process, the connection is never accepted and eventually times out. On the other hand, bitwarden_rs returns valid HTML documents when I access the service from node2 over their common network or simply curl the server from its own VM.
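To make the symptom concrete, using the placeholder addressing above (with 192.168.122.11 standing in for node1's address on the shared VM network, and whatever port the particular bitwarden_rs setup listens on):

```
# From node2 over the shared VM network: works, valid HTML comes back
curl -v http://192.168.122.11:80/

# From node1 itself: also works
curl -v http://127.0.0.1:80/

# From node2 or the server over the WireGuard address: hangs, then times out
curl -v --max-time 15 http://10.8.0.2:80/
```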

I tried tweaking the ROCKET_ADDRESS environment variable to accept connections from every host (0.0.0.0) and to listen on localhost only (127.0.0.1), in which case I put a Caddy reverse proxy in front of it on the same VM. I also tried binding the server to a WireGuard network address in the same way. The issue persisted: bitwarden_rs (or maybe Rocket under the hood) would not accept any connection requests coming through the WireGuard network.
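Concretely, these are the kinds of bindings I tried; treat them as sketches (image tag, ports, volume paths, and the Caddy v2 syntax depend on the particular setup):

```
# Docker variant: Rocket listening on all interfaces inside the container
docker run -d --name bitwarden \
  -e ROCKET_ADDRESS=0.0.0.0 \
  -v /bw-data/:/data/ \
  -p 80:80 \
  bitwardenrs/server:latest

# systemd variant: bind to localhost only and put Caddy in front
# ([Service] section or drop-in of the bitwarden_rs unit)
[Service]
Environment=ROCKET_ADDRESS=127.0.0.1
Environment=ROCKET_PORT=8000

# Caddyfile on node1
:80 {
    reverse_proxy 127.0.0.1:8000
}
```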

I tried the same setup with the Python built-in web server running on node1 instead of bitwarden_rs. That worked end-to-end across all network interfaces, the WireGuard one included, which makes me think that the root cause is somehow related to bitwarden_rs.
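The control test was nothing more than the standard library server:

```
# On node1: serve the current directory on all interfaces
python3 -m http.server 8000 --bind 0.0.0.0

# From node2: both of these return the listing, including over WireGuard
curl http://192.168.122.11:8000/
curl http://10.8.0.2:8000/
```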

Has anyone faced an issue like this? Can someone help me with troubleshooting it further?

This issue is resolved. The root cause had nothing to do with bitwarden_rs; it is related to the default MTU that the systemd wg-quick service sets on WireGuard network interfaces. If you change it to 1280 on all peers in a WireGuard VPN, HTTP connectivity between peers magically starts working as expected.
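For anyone who lands here with the same problem, the actual change is one line per peer in the wg-quick config plus a restart of the tunnel; a sketch using the wg0 naming from above:

```
# /etc/wireguard/wg0.conf on every peer
[Interface]
MTU = 1280
# ...rest of the [Interface] and [Peer] sections unchanged...

# Apply it (or reboot)
sudo systemctl restart wg-quick@wg0

# For a quick test without editing the config:
sudo ip link set dev wg0 mtu 1280
```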

Did you find that by chance, or is there a longer explanation behind it?

I am asking because I have had all kinds of issues with MTU and WireGuard but have never found a more thorough discussion of them.

Since WireGuard is an encapsulating VPN, it suffers the same issue as other tunnelling protocols: there is no single ‘correct’ value, as it depends on the route the packets take, but a figure such as 1280 is quite common.

If you want something ‘safe’, go with 1280. That will maintain maximum compatibility by meeting the IPv6 minimum for the encapsulated traffic, whilst using the smallest possible outer packet size. When I’m mobile on a work laptop, I end up running WireGuard to home over my corporate VPN, and there I have to drop to 1380.
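If you would rather measure what your particular path can carry than just pick 1280, one quick way on Linux is to ping across the tunnel with fragmentation forbidden and vary the payload size (addresses are the placeholders from earlier in the thread):

```
# -M do forbids fragmentation, -s sets the ICMP payload size.
# Payload + 28 bytes of ICMP/IPv4 headers has to fit inside the tunnel MTU,
# so -s 1252 corresponds to an inner packet of 1280 bytes.
ping -c 3 -M do -s 1252 10.8.0.1

# Increase -s until pings start failing, then set the interface MTU
# a little below the largest size that still gets through.
```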

"Fixing an old hack - why we are bumping the IPv6 MTU" is a little out of date, but it covers the reasoning behind some of the different values.

An example of a link type that requires dropping below the default of 1420 is PPPoE (without jumbo frames), where there are 8 extra bytes of header, so the encrypted payload needs to be smaller for the full packet/frame to stay within Ethernet standards.
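The arithmetic for that PPPoE case works out roughly as follows (taking the usual 1500-byte Ethernet MTU and WireGuard's worst-case 80 bytes of outer IPv6 + UDP + WireGuard headers, which is also where the 1420 default comes from):

```
  1500  Ethernet MTU
-    8  PPPoE header                               -> 1492 usable on the link
-   80  outer IPv6 + UDP + WireGuard data headers
------
  1412  largest safe MTU for the WireGuard interface on that path
```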

In addition to @rauxon's explanation, the following materials made me think I should experiment with the MTU to fix the weird blip in HTTP connectivity.

  1. WireGuard MTU fixes - Kerem Erkan
  2. Set mtu to 1280 · Issue #40 · ViRb3/wgcf · GitHub

At that point I was desperate and ready to give it a try. It worked, and I set an MTU of 1280 for all peers in my VPN.