Need help configuring HTTPS and reverse proxy

Hi all!
First of all, thanks for the wonderful work on this BitwardenRS version! Love it so much!

I installed BitwardenRS a few months ago on a Raspberry Pi and got it working just fine, but only in a web browser or with the Firefox plugin on localhost.
But I realized I was running outdated containers, and also missing lots of very useful features, one of them being the ability to use an external folder so data is saved outside the Docker container…
So a few days ago I decided to export my database from the app itself and start from scratch again.

Eventually I ended up with a fully up-to-date instance, with data saved correctly, but I’m still missing the security part of it. Also, the Android application and the Windows application are not able to connect, throwing “Trust anchor for certification path not found” and “Failed to fetch” errors respectively…

Following this https://github.com/dani-garcia/bitwarden_rs/wiki/Enabling-HTTPS I first used ROCKET_TLS to handle the certificates, which were generated following this https://github.com/dani-garcia/bitwarden_rs/wiki/Private-CA-and-self-signed-certs-that-work-with-Chrome
This is where I start to get lost… As my knowledge of certificates, (reverse) proxies and security in general is almost nil, I don’t understand what is what, who should do what, and what should go where…
So all the certificate files created above are under the /ssl/ folder on the system, which is mapped into the container with the “-v /ssl/:/ssl/” option.
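For reference, my run command looks roughly like the wiki’s example (the certificate file names and the host port are placeholders for mine):

# roughly what I'm running now; cert file names and host port are placeholders
docker run -d --name bitwarden \
  -e ROCKET_TLS='{certs="/ssl/bitwarden.crt",key="/ssl/bitwarden.key"}' \
  -v /ssl/:/ssl/ \
  -v /bw-data/:/data/ \
  -p 443:80 \
  bitwardenrs/server:latest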
Without running a reverse proxy, I should be able to run it that way and use the Android app, right?
I tried to check my certificate chain with https://comodosslstore.com/ssltools/ssl-checker.php but it tells me that “No SSL certificates were found on xxxxxxx:yyyy”…
I saw here and there that I should install the certificate on my Android device, but since I’m not able to see the certificate chain with this tool, I’m not sure it will change anything… I often hear about the “full certificate chain” and I don’t understand what it stands for…

Secondly, I want to understand how to configure a reverse proxy. Tell me if I’m correct: a reverse proxy will listen on the port I want to use in the end and redirect the traffic to the standard port (80) of my BitwardenRS server, right?
As I’m running BitwardenRS on a Raspberry Pi which runs only that application, I would go for the nginx solution (I know it a bit, as I’ve already deployed some local websites with it) running on the same device.
So let me try to sum up the steps (see the sketch after this list):

  • change my container options:
    • remove the “-e ROCKET_TLS=blablabla” option
    • remove the “-v /ssl/:/ssl/” option
    • change the port option to listen on port 80 only
  • install nginx on the system
  • create a new configuration based on what we can see here https://github.com/dani-garcia/bitwarden_rs/wiki/Proxy-examples and set the correct ports I want to use
  • run it
  • try to connect with Android app
    Correct?
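If I got it right, the new run command would then look something like this (host port 8080 is just an example; binding to 127.0.0.1 would keep it reachable only from the Pi itself, where nginx will run):

# container serves plain HTTP on 80; nginx will terminate HTTPS
docker run -d --name bitwarden \
  -v /bw-data/:/data/ \
  -p 127.0.0.1:8080:80 \
  bitwardenrs/server:latest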

Thanks a lot for any help on these two points (I know, they are very closely related, but they are two different points for me :stuck_out_tongue:)!

Have a safe and happy new year!
Brice

A few comments on your questions

You did the right thing saving your DB and putting it back in a docker container with a mounted volume. This is one of the key aspects of docker: you do not care about the container itself (you trust the authors to maintain it, including security) and it must not hold anything you cannot afford to lose. You should keep the image up to date (I use Watchtower for that).

In a default configuration, your docker container (an instantiated image) is on an internal (docker) network (by default, from memory, this is the 172.something network, but it does not matter). You do not have access to that network from your shell (I oversimplify, but that’s the idea), so in order for a container to be reachable on your normal network, you need to expose some of its ports, by mapping a host port (that you choose) to the container port (that is usually fixed by the container).
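A quick sketch of what that mapping looks like in practice (the image and port numbers are just examples):

# inside the container, nginx listens on its usual port 80;
# on the host, we reach it through the mapped port 8080
docker run -d --name web -p 8080:80 nginx
curl http://localhost:8080   # answered by the container's port 80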

One of the important aspects is that the containers see each other, and they can call each other by their container name (this is why you do not care about their IPs).

A typical approach for web apps in containers is to have a reverse proxy that dispatches the calls it gets to the relevant containers (through the internal docker network, calling them by name). For this, the URL of your app points to that web server (it will know, from the call, where to forward the request).

I strongly recommend using caddy (https://caddyserver.com/) as the web server to proxy your calls. Others will say nginx or apache, but we all know how wrong they are (this is a joke - both are great web servers, I just love caddy for its simplicity and HTTPS handling).

You would then start a container with caddy, expose the 80 and 443 port for caddy and configure caddy to forward calls to relevant containers. The configuration would be simple:

https://bitwarden.your.domain {
    proxy bitwarden:80
}

This means that when you call https://bitwarden.your.domain, caddy understands that the actual call needs to go to the container called bitwarden.
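To give an idea, running caddy in its own container could look something like this (the network name and Caddyfile path are examples; note that current Caddy v2 writes reverse_proxy instead of proxy in the Caddyfile):

# a user-defined network lets caddy reach "bitwarden" by name
docker network create web
docker run -d --name bitwarden --network web bitwardenrs/server:latest
docker run -d --name caddy --network web \
  -p 80:80 -p 443:443 \
  -v /path/to/Caddyfile:/etc/caddy/Caddyfile \
  caddy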

One fantastic feature of caddy (among many, many others) is that it handles HTTPS by default with (almost) no configuration.

  • It will call Let’s Encrypt and ensure that the certificates are always up to date.
  • When it sees the “s” in https://bitwarden.your.domain, it fires its internal processes to get a certificate for that name.
  • If you use http://something, it will just handle HTTP.
  • If you have https://bitwarden.your.domain and type http://bitwarden.your.domain as the URL, it will automatically redirect the call to https://bitwarden.your.domain.

Really, I cannot recommend caddy enough for this - I have complex docker setups (internal, external) and caddy is so simple for that.

Finally, you should consider docker-compose to manage your docker setup. It makes life much easier, and you often have a configuration provided by the image owners (or otherwise it is 3 lines of config per container).
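As an illustration, a minimal docker-compose.yml for the caddy setup above could look like this (image names and paths are examples; compose puts both services on a shared network, so caddy can reach bitwarden by name):

version: '3'
services:
  bitwarden:
    image: bitwardenrs/server:latest
    volumes:
      - /bw-data/:/data/
  caddy:
    image: caddy
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile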

I am one of those nginx guys. I personally run my server with bitwarden_rs in docker and nginx outside of docker as a regular install, and I just use a proxy pass to the docker container’s port on localhost within the nginx config.

Something like

server {
    server_name bitwarden.*;
    location / {
        proxy_pass http://localhost:1234;
    }
}

With 1234 being an example for the docker port and bitwarden.yourdomain as the subdomain.
This is not the entire config, but due to the new user link restriction I can’t post it. I tried.

After that I had the letsencrypt certbot handle the ssl part. It automatically adjusts the nginx config to point to your ssl certs and handles the redirect.

Not sure if this is exactly what you are looking for as an answer but it is a fairly simple way to make ssl work in that setup.

Hi!

Thanks a lot to you two for your answers!
WpJ, I completely understand the ease of use of another docker container for nginx, or the use of caddy, but I’m much more familiar with nginx, and I may also want to install other web applications accessible through this server.

WpJ, when you talk about “exposing” ports, this is a point where I’m getting lost…
If I start my bitwarden container alone with the -p option, is that what “exposing” the bitwarden ports means?
That said, do I understand correctly that the “-p xxx:yyy” option when launching the bitwarden container will route requests from host port xxx to internal docker port yyy? What is the behavior if I don’t specify the option?

Then if I want to use nginx outside docker, what should I use in my bitwarden docker setup? Remove the -p option and redirect 443 directly to the bitwarden docker on port 80?
If I run nginx in this context, the default ports of the bitwarden docker will still be exposed, so it may be a security issue, no?

Sorry, I may be mixing things up, but I’m lost and need a bit of clarification xD
Thanks again for your help !

You have two kinds of ports in docker. First, the ones that are naturally exposed by the application (say, 7777, or 80, or 443, or whatever). These are the ports where the application expects to receive data. It is built into the application (usually configurable, but that’s not the point).

You can have several docker containers that expose the same port, because each of them is linked with the IP of that container, within the docker network (172.something). They are not reachable from the host (the server that runs the docker engine), but they see each other like in any network. The problem is that you cannot, from the server, get to them, as the docker network is not available from the host.

This is why docker provides the ability to expose a container port to the server. This is the -p option, the one that maps the container port to the host network. If you say, for instance, -p 9345:7777, it means that port 9345 on the host will be directly connected to port 7777 on the container.
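You can also bind the mapping to a specific host address, which answers your security worry: bound to 127.0.0.1, the port is reachable by a proxy on the host but not from the rest of your network (the image name and ports are just examples):

# reachable only from the host itself, e.g. by nginx running there
docker run -d --name app -p 127.0.0.1:9345:7777 someimage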

This works great except that you have to remember a myriad of ports.

The bigger problem is that you end up with web apps such as http://app1:7367, http://app2:3536, etc., because you can have only one that exposes port 80.

So here comes the proxy.

The proxy (nginx in your case, though you should give caddy a try :)) will start (again, in your case) as a service on the host. It will then bind to ports 80 and 443 (and others if you want). Your configuration in nginx will then state: “if nginx gets a call on port 80 for the service app1 (it is in a header of the HTTP call), redirect the call to localhost:7367”. Do not forget that you need to start app1 with -p 7367:7367 so that its own port is available to the server. If the app naturally exposes its port 80, you would map -p 7367:80 (or whatever random port you want to have).
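In nginx terms, that rule is roughly this (the server name and port come from the example above):

server {
    listen 80;
    server_name app1.your.domain;
    location / {
        # forward to the host port that -p 7367:80 exposed
        proxy_pass http://localhost:7367;
    }
}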

This is not very nice because you end up with

  • nginx handling ports 80 and 443
  • a myriad of apps that expose some random ports to the server

It would be much better to have nginx in its own container: then you just need to expose its ports 80 and 443 to the host (-p) and proxy directly on the docker network (that nginx is now on). You can even call your containers by name:port.
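A rough sketch of that setup (the network name and paths are examples; the official nginx image reads extra configs from /etc/nginx/conf.d/):

docker network create web
docker run -d --name bitwarden --network web bitwardenrs/server:latest
docker run -d --name proxy --network web \
  -p 80:80 -p 443:443 \
  -v /path/to/conf.d:/etc/nginx/conf.d:ro \
  nginx

# inside the proxy config you can then write:
#     proxy_pass http://bitwarden:80;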

Let me know if it is still not clear - it takes some time to get used to that.

Yeah, this is way better organized in my head now! Thanks a lot for being available and giving such good explanations.

Does this mean that in this case (nginx being in a container also) I could get rid of the -p option for the bitwarden container?

Why?

As a side note, I was referring to “other web applications”, but those would be custom websites developed by myself, so not other containers :wink: This is why it may not be ideal to have nginx in a container for those applications (although I understand the point of linking it with another container like bitwarden).

Other questions to come once those points are rearranged in my head :stuck_out_tongue:

Yes. You must really see the containers as being on their own network. This network is not available from your host. The way to make the port of a container available to the host is by “mapping” it to one that is free on the host. If two containers expose 80 on the docker network (they can, these are two separate IPs), only one can have it mapped to port 80 on the host. The other must be mapped to something else (or both are mapped to something else).
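Concretely (the image and host ports are just examples):

# both containers listen on 80 internally, each on its own docker IP
docker run -d --name app1 -p 8081:80 nginx
docker run -d --name app2 -p 8082:80 nginx
# on the host they are now http://localhost:8081 and http://localhost:8082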

That was just the first part. nginx of course will manage 80 and 443; what I meant is that with your approach you end up with plenty of various ports available on your host to which nginx will reverse proxy. Which is not, say, aesthetic (you should only see the ports you actually want to go to).

I run about 30 docker containers. These are:

  • images I get from Docker Hub (example: Bitwarden, Nextcloud, …)
  • images with my own code that are not web related (example: my monitoring system)
  • images with my own web sites with a backend (example: SSR pages that use a backend to store data. It could be PHP for instance (I suffer just by writing this word (PHP), but it gives an idea))
  • images that host static web pages (SPA, PWA, …)

ALL of this is in containers. I have (almost) nothing running on my server outside docker. It makes development orders of magnitude easier, and you can go for nice CI/CD.

At some point I was hosting my static sites on my caddy docker. But I got rid of that, and each site now has its own caddy in its container, serving the content. Is this a waste of resources? An image takes 20 MB, and I do not even feel the CPU load on my several-years-old 600€ machine.

EDIT: just for the fun of it, I checked the load right now:

~ # uptime
 12:56:21 up 110 days, 20:39,  1 user,  load average: 0.51, 0.49, 0.50

Just try to tell yourself “I will put everything in a docker container (the ones you get and the ones you build)” and you will see how much easier everything becomes. You have ONE 80/443 port pair managed by your dockerized nginx, which then dispatches the calls through the docker network. You never see the ports of the containers.

You then have some containers that are non-web based (say, MQTT) that will expose their port on the host because you actually need to reach them directly.

Thanks a lot again. Things are getting clearer and clearer thanks to you! :muscle:

Let’s say I’m too stubborn and will stick to nginx being outside a container for the moment:

  • I run my bitwarden docker container with port 80 exposed with the -p option to some chosen port on the host
  • I run nginx on the side and redirect port 443 of the host to the port exposed above

The other thing is that as I want to access my bitwarden instance outside home, I’m using a dynamic DNS (from NoIp), which is something like http://mydns.ddns.net.
So today I’ve configured my router to redirect requests from a chosen port to the port exposed by the bitwarden container.
Does this still fit the configuration? Do I just need to change the port my router will redirect to?
Basically, the reason I want to stick to a common nginx deployment is that I don’t know docker well enough to manage all this at the moment… Maybe I’ll do it later on, as I’m getting more and more interested in docker, but unfortunately I don’t have enough time to handle it right now.

Now comes the SSL part…
I’ve generated all the certificates and so on as described on the bitwardenrs wiki page. The files sit in a folder on my system, shared with the bitwarden container through the option “-v /path/to/ssl/:/ssl/”.
If I then want to switch to a configuration with a reverse proxy, I guess I can also get rid of this option for the bitwarden container and handle the trusted certificates in the proxy, right?
How do I do that? Is it the thing with Let’s Encrypt?

If you decide to reverse proxy, it is for a reason. You could still directly connect to the “whatever” port on your host and get to BW. Similarly, you could redirect your external traffic to that port.

You reverse proxy for many reasons, including:

  • a well-organized system where one front gets the traffic (nginx in your case) and then forwards it to http://something:yourwhateverport so that you do not have to remember the latter
  • terminating the TLS traffic and managing the renewal of the certificates

I think that the first part is now clear for you.

As for the second: I have not used the BW built-in TLS management (to be frank, I did not even know there was one, though it is required in some cases so it obviously has it built in as well). I use my proxy (nginx in your case) to handle that for all of my containers and to retrieve the TLS certificates from Let’s Encrypt. In the case of caddy this is a built-in procedure, so for nginx you need to look at certbot or something similar.

The only reason I have is to get the Windows and Android applications to work with my setup!
Currently I’m running only the bitwarden docker container alone, mapping its port 80 to something on the host, with a redirection on my router to access it from the outside.
But both applications are still throwing errors about trusting the certificate…

Windows application reports “Failed to fetch”.

Android application shows “Exception message: java.security.cert.CertPathValidatorException: Trust anchor for certification path not found”.

I’ve been told that this may be fixed by using a reverse proxy, because bitwarden(rs?) does not support ROCKET_TLS very well.

I believe that the issue is with the provider of the certificate (is it a self-signed one?).
Using a reverse proxy can automate the retrieval of recognized certificates (via Let’s Encrypt); you would then have the proxy terminating the TLS tunnel and forwarding over HTTP to the container.

I do not know nginx much, but if you search for nginx+let’s encrypt there are probably good tutorials on how to get the certs.

Yes, I guess the certificates are self-signed ones; I followed this: Private CA and self signed certs that work with Chrome · dani-garcia/bitwarden_rs Wiki · GitHub

What you are asking about, nginx outside of docker, is basically exactly what I was describing. You just set up nginx to point to your docker container as a reverse proxy. Seems like you managed that.
As for Android and the other apps, they want https. I agree with WpJ, it seems like there is a problem with the ssl certificate. If you have already set up your domain and you can reach everything on port 80, you could just create certificates using the letsencrypt certbot. It is super easy. It will automatically create the certificates and adjust your nginx config for you. All you need is certbot installed and an nginx config file in the sites-enabled or conf.d directory that is named after your domain name.

For Ubuntu this is what you would have to do

sudo apt install certbot

If you use Nginx, then you also need to install the Certbot Nginx plugin.

sudo apt install python3-certbot-nginx

Next, run the following command to obtain and install a TLS certificate.

sudo certbot --nginx --agree-tos --redirect --hsts --staple-ocsp --email youremail@yourdomain.com -d bitwarden.yourdomain.com

Basically, substitute your email and your domain name. That is it. Certbot will take care of the rest as long as it finds a config file for bitwarden.yourdomain.com in /etc/nginx/conf.d or /etc/nginx/sites-enabled (I am assuming you know nginx directories and config structures etc.).
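Certbot also sets up automatic renewal; you can check that renewal will work with:

sudo certbot renew --dry-run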

Assuming you have nginx, letsencrypt and docker + docker-compose installed, I can give you my docker-compose file as well as my nginx config file if you can’t make it work.
Like I said, I only actually run bitwarden_rs in docker, and nginx outside of docker as a regular install on an Ubuntu VPS.

WpJ seems way more knowledgeable about docker than I am. I am still learning myself. But I have been “messing” a lot with nginx lately, so I hope I can be of some help.

Yes, it’s basically your explanation, combined with a discussion with a friend of mine who also knows nginx configuration well, that brought me to this conclusion! Thanks for that!

I managed to understand it, not to do it actually :stuck_out_tongue:

It seems so, yes… When I try to validate my certificate chain with some online tools, I’m told that no certificates could be found at the location… while my router redirects requests to my bitwarden docker container, which has the certificates available… :thinking:

Hmmm…? This is another thing you both mentioned: what does “setting up my domain” mean? :yum: I have a dyn dns from NoIp pointing to my box’s fixed IP address, which I use from the outside to reach applications hosted at home. Is that it?
Then when you talk about “bitwarden.yourdomain.com”, what would that be in my case? As my dyn dns is “briceparmentier.ddns.net”, which could be used to access different applications, should I have an nginx configuration file called “briceparmentier.ddns.net/bitwarden”, for example?

Thanks a lot for your proposal @Antergosgeek, I’ll keep it in case I’m really stuck :+1:

PS: currently I’m stuck with some stupid error, maybe; I’m unable to install nginx properly (no nginx folder in /etc/)… :frowning_face:

I would love to see your docker-compose file and nginx config file (even though I am using Apache). I can get to Bitwarden through port 5080, but my reverse proxy fails. I have another reverse proxy on a VPS instance that works with no problems. I’m baffled.

This is my docker-compose.yml:

version: '3'
services:
  bitwarden_rs:
    image: bitwardenrs/server:latest
    container_name: bitwarden_rs
    restart: unless-stopped
    volumes:
      - ~/docker/bitwarden:/data
    ports:
      - '127.0.0.1:34712:3012'
      - '127.0.0.1:1234:80'
    environment:
      SIGNUPS_ALLOWED: 'false'
      WEBSOCKET_ENABLED: 'true'
      LOG_FILE: /data/logs/bitwarden.log
      EXTENDED_LOGGING: 'true'
      LOG_LEVEL: 'error'

And then here is my Nginx Config:

Location: /etc/nginx/conf.d/bitwarden.mydomain
(I use conf.d instead of sites-enabled)

First the server and proxy:

server {
    server_name bitwarden.*;

    location / {
        proxy_pass http://localhost:1234;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /notifications/hub {
        # 34712 is the host port mapped to the container's websocket
        # port 3012 in the docker-compose file above
        proxy_pass http://localhost:34712;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location /notifications/hub/negotiate {
        proxy_pass http://localhost:1234;
    }

… then the ssl certificates and the redirect to 443 added by the letsencrypt certbot to make https work:

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/bitwarden.mydomain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/bitwarden.mydomain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    add_header Strict-Transport-Security "max-age=31536000" always; # managed by Certbot
    ssl_trusted_certificate /etc/letsencrypt/live/bitwarden.mydomain.com/chain.pem; # managed by Certbot
    ssl_stapling on; # managed by Certbot
    ssl_stapling_verify on; # managed by Certbot
}
server {
    if ($host = bitwarden.mydomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name bitwarden.*;
    return 404; # managed by Certbot
}
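After any change like this, it is worth validating the config and reloading nginx:

sudo nginx -t && sudo systemctl reload nginx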

Hope this helps. So far it runs perfectly for me.

Obviously, you’ve got to make sure that if you have any kind of firewall, you open your ports accordingly.

I have Windows, Linux, Mac and iOS Bitwarden clients and they can all connect to it.


Many thanks! Unfortunately I get the same result using your docker-compose file… So it seems pretty likely that it is an Apache configuration problem, which seems weird given that I have a reverse proxy working fine on my main VPS instance. I may just have to try it there and see.

It was definitely an Apache misconfiguration – I set everything up in docker on my regular VPS and it worked.

Thank you for the help anyway!

Not a problem. Glad you could figure it out.