Running highly-available Vaultwarden

Hello!

I’m Lester, the maintainer of the Vaultwarden Helm chart (https://github.com/guerzon/vaultwarden).

I keep getting approached by users who want to run multiple replicas of Vaultwarden in Kubernetes, so I would like to reopen that discussion, specifically to go over the current blockers.

  1. Database - I believe there will be no issue with the database (at least with PostgreSQL), since all replicas can point at the same external server; see the sketch below this list.
  2. Data directory containing the attachments, icons cache, and temporary files - it would be great if we could use S3. Most Kubernetes users either have no way of using a shared filesystem such as NFS or are simply against it. I currently haven’t found any PRs or feature requests to add S3 support.
  3. ?
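
For point 1, here is a minimal sketch of what I have in mind, using a plain Deployment rather than the chart's own values; the image tag, Secret name, and database host are placeholders:

```yaml
# Sketch only: two Vaultwarden replicas sharing one external PostgreSQL server.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vaultwarden
spec:
  replicas: 2
  selector:
    matchLabels:
      app: vaultwarden
  template:
    metadata:
      labels:
        app: vaultwarden
    spec:
      containers:
        - name: vaultwarden
          image: vaultwarden/server:latest   # placeholder tag
          env:
            # Vaultwarden reads its connection string from DATABASE_URL,
            # e.g. postgresql://user:pass@postgres:5432/vaultwarden
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: vaultwarden-db   # hypothetical Secret
                  key: database-url
          ports:
            - containerPort: 80
```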

Anything else I’m missing?

I am aware of the workaround to disable icons and attachments, but these features might still be important to some organizations.
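
For reference, the workaround I mean is roughly the following environment snippet for the container spec; I am quoting the option names from memory, so they should be double-checked against the Vaultwarden configuration docs:

```yaml
# Sketch of the "disable icons and attachments" workaround.
# Option names are from memory; verify them against the Vaultwarden docs.
env:
  - name: DISABLE_ICON_DOWNLOAD   # stop downloading/caching icons into the data directory
    value: "true"
  - name: ICON_SERVICE            # optionally serve icons from an external service instead
    value: "duckduckgo"
  - name: ORG_ATTACHMENT_LIMIT    # 0 effectively disables organization attachments
    value: "0"
  - name: USER_ATTACHMENT_LIMIT   # 0 effectively disables personal attachments
    value: "0"
```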

I will raise a feature request based on the outcome of this discussion.

Thanks in advance,
Lester

There are several posts about this topic with regard to HA.

One item missing from your list: WebSocket notifications will not work when using more than one pod, since there is no internal communication between the pods, and thus not all users will receive updates.

You are always welcome to open a discussion in the ideas section regarding this, but from my side it is not going to get any priority. It takes a lot of work to build S3 support into Vaultwarden.

Maybe you can look into sidecar solutions that sync with S3, for example with something like rclone, or perhaps even s3fs.
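
For example, something along these lines could work as a one-way periodic push; the bucket name, remote name, and sync interval are just placeholders, and two-way syncing of a live data directory is risky:

```yaml
# Sketch of an rclone sidecar that periodically pushes the data directory to S3.
# One-way only; restores and multi-writer consistency are not handled here.
containers:
  - name: vaultwarden
    image: vaultwarden/server:latest
    volumeMounts:
      - name: data
        mountPath: /data
  - name: rclone-sync
    image: rclone/rclone:latest
    command: ["/bin/sh", "-c"]
    # "s3remote" must be defined in the mounted rclone.conf; the bucket is a placeholder.
    args:
      - while true; do rclone sync /data s3remote:vaultwarden-data; sleep 300; done
    volumeMounts:
      - name: data
        mountPath: /data
        readOnly: true
      - name: rclone-config
        mountPath: /config/rclone   # default config location in the rclone image
volumes:
  - name: data
    emptyDir: {}
  - name: rclone-config
    secret:
      secretName: rclone-config   # hypothetical Secret containing rclone.conf
```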

Maybe https://github.com/awslabs/mountpoint-s3/tree/main/docker could be helpful too.
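
There is also a related CSI driver from awslabs (mountpoint-s3-csi-driver) that exposes a bucket as a volume. A rough sketch of a statically provisioned PersistentVolume, with the bucket name as a placeholder and the field names taken from that driver's examples as I remember them:

```yaml
# Sketch of a PersistentVolume backed by the Mountpoint for Amazon S3 CSI driver.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: vaultwarden-data-s3
spec:
  capacity:
    storage: 10Gi          # required by the API, not enforced by the driver
  accessModes:
    - ReadWriteMany
  mountOptions:
    - allow-delete
  csi:
    driver: s3.csi.aws.com
    volumeHandle: vaultwarden-data-s3
    volumeAttributes:
      bucketName: vaultwarden-data   # placeholder bucket name
```

Keep in mind that Mountpoint does not provide full POSIX semantics (no file locking, no in-place edits of existing files), so things like the icon cache and temporary files would need testing.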

Thanks @BlackDex, will check these out.

Hi, even though I haven’t deployed Vaultwarden on Kubernetes and have only deployed it manually, I would like to help by offering a different point of view.

As far as the database goes, it shouldn’t be a problem. I have used MariaDB with the Galera module and it worked really well; I have also used another setup with Pacemaker and DRBD to make a two-node cluster that replicated the database storage.
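
In Vaultwarden terms, pointing at a cluster like that is just a matter of the connection string, assuming the image is built with MySQL/MariaDB support; the host and credentials here are placeholders:

```yaml
# Sketch: Vaultwarden pointed at a Galera cluster through a load-balanced endpoint.
env:
  - name: DATABASE_URL
    value: "mysql://vaultwarden:changeme@galera-lb:3306/vaultwarden"   # placeholder host/credentials
```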

When it comes to the data directory, I think a shared storage solution would be ideal. I don’t know the specifics of the deployments or the technical limitations, but I would suggest something like Ceph or GlusterFS if people are willing to manage and maintain another service. If they are not, I think using LINSTOR/DRBD would be a great idea: it would replicate the data directory across all nodes. I have used it in my latest project (https://www.youtube.com/watch?v=vyXpox_M4hA) and it works :+1:. I have found some resources that might help you (and see the claim sketch after the links):
https://blog.palark.com/kubernetes-storage-performance-linstor-ceph-mayastor-vitastor/
Using DRBD Block Devices for Kubevirt - LINBIT
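
Whichever backend you pick, on the Kubernetes side it should boil down to a ReadWriteMany claim; here is a sketch with a hypothetical storage class name:

```yaml
# Sketch of a shared data-directory claim; the storage class depends on the backend
# (CephFS, GlusterFS, or a LINSTOR/DRBD-backed class) and is a placeholder here.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vaultwarden-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: shared-replicated   # placeholder
  resources:
    requests:
      storage: 10Gi
```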

And for the WebSocket notifications I’m afraid I cannot help you. I also had multiple instances of Vaultwarden running and couldn’t fix it; the closest I got was configuring a session cookie on the reverse proxy, stored on the client for each server, so every connection went to the same node.
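
If you are doing the same thing with ingress-nginx, the cookie-affinity equivalent should be annotations along these lines; the hostname, cookie name, and ingress class are assumptions:

```yaml
# Sketch of cookie-based session affinity on ingress-nginx, so each client sticks to one pod.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vaultwarden
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "vw-affinity"   # arbitrary cookie name
spec:
  ingressClassName: nginx
  rules:
    - host: vault.example.com   # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: vaultwarden
                port:
                  number: 80
```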

Good luck! :+1: