My Vaultwarden randomly "died" and doesn't work properly anymore

Hey all,

I don't know what happened exactly, but somehow my Vaultwarden is having A LOT of problems.

Trying to access it from outside of my local network gives this screen (test for yourself):

[screenshot]

When I try to access it via the same domain from the local network, it's stuck here:

When I try to access it through the IP, I get a bit further and reach the login screen:

Since it's not HTTPS via the IP, I can't log in or do anything else, of course.

What I tried:

  • Created a new volume

  • Created a new Docker container with the older version vaultwarden/server:1.26.0 and the new volume (see the sketch after this list)

  • Rebooted/restarted my Raspberry Pi 10x, no fix. Restarted my router, no fix.

  • Modified my config files, no difference.

  • Double-checked that the ports are forwarded on my router; they are.
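
A rough sketch of that container re-creation, in case it helps anyone reproduce it (the container and volume names are made up; the port mappings match my proxy config below):

docker volume create vaultwarden-data-new
docker run -d --name vaultwarden \
  -v vaultwarden-data-new:/data \
  -p 8080:80 -p 3012:3012 \
  vaultwarden/server:1.26.0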

These are my two config files:

  1. /etc/nginx/sites-enabled/bitwarden.conf
server {
    if ($host = bitwarden.furrkan.de) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name bitwarden.furrkan.de;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name bitwarden.furrkan.de;

    ssl_certificate /etc/letsencrypt/live/bitwarden.furrkan.de/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/bitwarden.furrkan.de/privkey.pem; # managed by Certbot
    ssl_dhparam /etc/ssl/certs/dhparam.pem;

    client_max_body_size 128M;
    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload";

    location / {
        proxy_connect_timeout 15;
        proxy_read_timeout 15;
        proxy_send_timeout 15;
        proxy_intercept_errors off;
        proxy_http_version 1.1;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://localhost:8080;
    }
    location /notifications/hub {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://localhost:3012;
    }

    location /notifications/hub/negotiate {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://localhost:8080;
    }

    error_page 403 404 500 502 503 504 @error_page;
	
    location @error_page {
        root /usr/syno/share/nginx;
        rewrite (.*) /error.html break;
        allow all;
    }
}
  2. /etc/nginx/nginx.conf
#user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
        worker_connections 768;
        # multi_accept on;
}

http {

        ##
        # Basic Settings
        ##
        #resolver 127.0.0.11;

        sendfile on;
        tcp_nopush on;
        types_hash_max_size 2048;
        # server_tokens off;

        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ##
        # SSL Settings
        ##

        ssl_protocols TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
        ssl_prefer_server_ciphers on;

        ##
        # Logging Settings
        ##

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        ##
        # Gzip Settings
        ##

        gzip on;

        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        ##
        # Virtual Host Configs
        ##

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

I'm running Vaultwarden through Portainer. I have lots of other containers running, and they work fine.

These are my env variables:

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
LOG_FILE=/data/vaultwarden.log
TZ=Europe/Berlin
WEBSOCKET_ENABLED=true
LC_ALL=C.UTF-8
DEBIAN_FRONTEND=noninteractive
UDEV=off
ROCKET_PROFILE=release
ROCKET_ADDRESS=0.0.0.0
ROCKET_PORT=80
SIGNUPS_ALLOWED=false
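
For reference, those variables map onto a plain docker run roughly like this (a sketch; the volume name and port mappings are assumptions based on the proxy config above):

docker run -d --name vaultwarden \
  -e LOG_FILE=/data/vaultwarden.log \
  -e TZ=Europe/Berlin \
  -e WEBSOCKET_ENABLED=true \
  -e SIGNUPS_ALLOWED=false \
  -v vaultwarden-data:/data \
  -p 8080:80 -p 3012:3012 \
  vaultwarden/server:latest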

A snippet from the log:


|                           Version 1.26.0                           |
|--------------------------------------------------------------------|
| This is an *unofficial* Bitwarden implementation, DO NOT use the   |
| official channels to report bugs/features, regardless of client.   |
| Send usage/configuration questions or feature requests to:         |
|   https://vaultwarden.discourse.group/                             |
| Report suspected bugs/issues in the software itself at:            |
|   https://github.com/dani-garcia/vaultwarden/issues/new            |
\--------------------------------------------------------------------/

[INFO] No .env file found.
[2023-01-03 02:27:22.537][vaultwarden][INFO] Private key created correctly.
[2023-01-03 02:27:22.537][vaultwarden][INFO] Public key created correctly.
Running migration 20180114171611
Running migration 20180217205753
Running migration 20180427155151
Running migration 20180508161616
Running migration 20180525232323
Running migration 20180601112529
Running migration 20180711181453
Running migration 20180827172114
Running migration 20180910111213
Running migration 20180919144557
Running migration 20181127152651
Running migration 20190526216651
Running migration 20191010083032
Running migration 20191117011009
Running migration 20200313205045
Running migration 20200409235005
Running migration 20200701214531
Running migration 20200802025025
Running migration 20201130224000
Running migration 20201209173101
Running migration 20210311190243
Running migration 20210315163412
Running migration 20210430233251
Running migration 20210511205202
Running migration 20210701203140
Running migration 20210830193501
Running migration 20211024164321
Running migration 20220117234911
Running migration 20220302210038
[2023-01-03 02:27:23.541][vaultwarden::api::notifications][INFO] Starting WebSockets server on 0.0.0.0:3012
[2023-01-03 02:27:23.546][start][INFO] Rocket has launched from http://0.0.0.0:80
[2023-01-03 02:28:33.782][_][WARN] Remote left: channel closed.
[2023-01-03 02:28:33.782][_][WARN] Remote left: channel closed.
[2023-01-03 02:28:33.783][_][WARN] Remote left: channel closed.
[2023-01-03 02:28:33.784][_][WARN] Remote left: channel closed.
[2023-01-03 02:28:38.687][_][WARN] Remote left: channel closed.
[2023-01-03 02:28:38.689][_][WARN] Remote left: channel closed.
[2023-01-03 02:28:38.689][_][WARN] Remote left: channel closed.
[2023-01-03 02:28:38.692][_][WARN] Remote left: channel closed.
[2023-01-03 02:28:39.296][_][WARN] Remote left: channel closed.
[2023-01-03 02:28:39.296][_][WARN] Remote left: channel closed.
[2023-01-03 02:28:39.297][_][WARN] Remote left: channel closed.
[2023-01-03 02:28:39.297][_][WARN] Remote left: channel closed.
[2023-01-03 02:28:40.145][_][WARN] Remote left: channel closed.
[2023-01-03 02:28:40.146][_][WARN] Remote left: channel closed.
[2023-01-03 02:28:40.148][_][WARN] Remote left: channel closed.
[2023-01-03 02:28:40.149][_][WARN] Remote left: channel closed.
[2023-01-03 02:29:03.622][_][WARN] Remote left: channel closed.
[2023-01-03 02:29:03.624][_][WARN] Remote left: channel closed.
[2023-01-03 02:29:03.629][_][WARN] Remote left: channel closed.
[2023-01-03 02:29:03.636][_][WARN] Remote left: channel closed.
[2023-01-03 02:29:04.474][_][WARN] Remote left: channel closed.
[2023-01-03 02:29:04.474][_][WARN] Remote left: channel closed.
[2023-01-03 02:29:04.479][_][WARN] Remote left: channel closed.
[2023-01-03 02:29:04.479][_][WARN] Remote left: channel closed.
[2023-01-03 02:29:05.999][_][WARN] Remote left: channel closed.
[2023-01-03 02:29:06.000][_][WARN] Remote left: channel closed.
[2023-01-03 02:29:06.001][_][WARN] Remote left: channel closed.
[2023-01-03 02:29:06.003][_][WARN] Remote left: channel closed.
[2023-01-03 02:31:23.937][request][INFO] POST /identity/connect/token
[2023-01-03 02:31:23.940][response][INFO] (login) POST /identity/connect/token => 400 Bad Request
[2023-01-03 02:31:54.393][request][INFO] POST /identity/connect/token
[2023-01-03 02:31:54.394][response][INFO] (login) POST /identity/connect/token => 400 Bad Request
[2023-01-03 02:32:00.678][_][WARN] Remote left: channel closed.
[2023-01-03 02:32:00.679][_][WARN] Remote left: channel closed.
[2023-01-03 02:32:09.055][_][WARN] Remote left: channel closed.
[2023-01-03 02:32:09.056][_][WARN] Remote left: channel closed.
[2023-01-03 02:34:14.396][_][ERROR] No matching routes for HEAD /.
[2023-01-03 02:36:54.334][_][WARN] Parameter guard `p: PathBuf` is forwarding: BadStart('.').
[2023-01-03 02:36:54.334][_][ERROR] No matching routes for GET /.git/config.
[2023-01-03 02:36:54.335][_][WARN] Responding with registered (not_found) 404 catcher.
[2023-01-03 02:36:58.317][_][WARN] Remote left: channel closed.
[2023-01-03 02:36:58.319][_][WARN] Remote left: channel closed.
[2023-01-03 02:36:58.323][_][WARN] Remote left: channel closed.
[2023-01-03 02:36:58.324][_][WARN] Remote left: channel closed.
[2023-01-03 02:37:10.891][_][WARN] Remote left: channel closed.
[2023-01-03 02:37:10.897][_][WARN] Remote left: channel closed.
[2023-01-03 02:37:10.898][_][WARN] Remote left: channel closed.
[2023-01-03 02:37:10.898][_][WARN] Remote left: channel closed.
[2023-01-03 02:38:37.888][request][INFO] POST /identity/connect/token
[2023-01-03 02:38:37.889][response][INFO] (login) POST /identity/connect/token => 400 Bad Request
[2023-01-03 02:38:44.594][_][WARN] Remote left: channel closed.
[2023-01-03 02:38:44.595][_][WARN] Remote left: channel closed.
[2023-01-03 02:38:44.597][_][WARN] Remote left: channel closed.
[2023-01-03 02:48:38.048][request][INFO] POST /identity/connect/token
[2023-01-03 02:48:38.049][response][INFO] (login) POST /identity/connect/token => 400 Bad Request

Can anybody help me identify why I'm running into these issues? I can upload any log files if needed.
Thanks!
Kind regards,
Furkan

Can you try a different browser? Incognito mode? A different computer?

Do you get the same result?

Forgot to mention all that.
Yes, I tried Chrome, Edge, and Firefox, all in incognito mode.
Yes, different computers, and even different VMs from work.
All with the same results as described above.


First, I would suggest updating to the latest version of Vaultwarden.
Second, remove the error_page config; that will break the clients.

Further, check the nginx logs and double-check the proxy config against the wiki: Proxy examples · dani-garcia/vaultwarden Wiki · GitHub
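
For reference, checking and updating could look roughly like this (the log paths come from the nginx.conf posted above; the image tag matches the one used in this thread):

sudo nginx -t                           # validate the nginx config
sudo tail -f /var/log/nginx/error.log   # watch for errors while reproducing the problem
docker pull vaultwarden/server:latest   # fetch the latest image before recreating the container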

Hey BlackDex,
I was running :testing when it all started to break, then tried :latest, and also, as in the pics above, a two-month-old version.

I'll check the wiki for the nginx config and will post an update, thanks!

Hey,
I switched back to the :latest Docker image.
I also switched to your nginx conf:

# The `upstream` directives ensure that you have a http/1.1 connection
# This enables the keepalive option and better performance
#
# Define the server IP and ports here.
upstream vaultwarden-default {
  zone vaultwarden-default 64k;
  server 127.0.0.1:8080;
  keepalive 2;
}
upstream vaultwarden-ws {
  zone vaultwarden-ws 64k;
  server 127.0.0.1:3012;
  keepalive 2;
}

# Redirect HTTP to HTTPS
server {
    if ($host = bitwarden.furrkan.de) {
        return 301 https://$host$request_uri;
    } # managed by Certbot


    listen 80;
    listen [::]:80;
    server_name bitwarden.furrkan.de;
    return 301 https://$host$request_uri;


}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name bitwarden.furrkan.de;

    #Specify SSL Config when needed
    ssl_certificate /etc/letsencrypt/live/bitwarden.furrkan.de/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/bitwarden.furrkan.de/privkey.pem; # managed by Certbot
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/bitwarden.furrkan.de/fullchain.pem;

    client_max_body_size 128M;
    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload";

    location / {
      proxy_http_version 1.1;
      proxy_set_header "Connection" "";

      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;

      proxy_pass http://vaultwarden-default;
    }

    location /notifications/hub/negotiate {
      proxy_http_version 1.1;
      proxy_set_header "Connection" "";

      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;

      proxy_pass http://vaultwarden-default;
    }

    location /notifications/hub {
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";

      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header Forwarded $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;

      proxy_pass http://vaultwarden-ws;
    }

}
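
After swapping in the new conf, nginx needs a validate-and-reload (assuming a systemd-managed install):

sudo nginx -t && sudo systemctl reload nginx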

If I access https://bitwarden.furrkan.de from my local network, where my actual Raspi also sits, it's working (kinda). It sometimes just bugs out with an error, but my vault is accessible and it works.

From external it's still stuck here:

[screenshot]

I can confirm that it was working from external before.

When I try to access it from external, this is what the logs output:
[screenshot]
If it helps.
Thanks

What's interesting, @BlackDex:
I can access the admin web interface without problems from external.

Everything in the admin web interface works fine, but once I try to go back to my vault, aka the main page, it's fucked lol
[screenshot]

Me confused

I suggest fixing your DOMAIN config, as it is being reported as invalid.

And since you are publicly showing your domain, I just tested it myself, and it seems the JavaScript files are being cut off. It looks like something is buffering the proxied request and only sending part of the JavaScript files. You can see this if you open the developer console via F12.
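
For anyone following along: DOMAIN is set the same way as the environment variables listed earlier, and should contain the full public URL, e.g.:

DOMAIN=https://bitwarden.furrkan.de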

Hey,
oh thanks!
I forgot to check it that way.
I analyzed my nginx error log and found this (the IP is not secret :slight_smile: ):

2023/01/03 18:44:59 [crit] 11106#11106: *1670 open() "/var/lib/nginx/proxy/4/03/0000000034" failed (13: Permission denied) while reading upstream, client: 81.89.251.81, server: bitwarden.furrkan.de, request: "GET /app/main.5f8690f5c03a207c390a.js HTTP/2.0", upstream: "http://127.0.0.1:8080/app/main.5f8690f5c03a207c390a.js", host: "bitwarden.furrkan.de", referrer: "https://bitwarden.furrkan.de/"
2023/01/03 18:44:59 [crit] 11106#11106: *1670 open() "/var/lib/nginx/proxy/5/03/0000000035" failed (13: Permission denied) while reading upstream, client: 81.89.251.81, server: bitwarden.furrkan.de, request: "GET /app/vendor.7c30c6e2b5ba56506ea9.js HTTP/2.0", upstream: "http://127.0.0.1:8080/app/vendor.7c30c6e2b5ba56506ea9.js", host: "bitwarden.furrkan.de", referrer: "https://bitwarden.furrkan.de/"
2023/01/03 18:44:59 [crit] 11106#11106: *1670 open() "/var/lib/nginx/proxy/6/03/0000000036" failed (13: Permission denied) while reading upstream, client: 81.89.251.81, server: bitwarden.furrkan.de, request: "GET /app/polyfills.428c25638840333a09ee.js HTTP/2.0", upstream: "http://127.0.0.1:8080/app/polyfills.428c25638840333a09ee.js", host: "bitwarden.furrkan.de", referrer: "https://bitwarden.furrkan.de/"
2023/01/03 18:44:59 [crit] 11106#11106: *1670 open() "/var/lib/nginx/proxy/7/03/0000000037" failed (13: Permission denied) while reading upstream, client: 81.89.251.81, server: bitwarden.furrkan.de, request: "GET /app/main.82096a4e78d5d3f7b01b.css HTTP/2.0", upstream: "http://127.0.0.1:8080/app/main.82096a4e78d5d3f7b01b.css", host: "bitwarden.furrkan.de", referrer: "https://bitwarden.furrkan.de/"

I don't know if I'm reading it correctly, but does it mean that my external client, where I'm opening the website, is trying to access the site via http://127.0.0.1:8080? Would that explain it?
I don't know what's even causing this…

But this would explain why it's working internally and not externally.
Thanks
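
(The "(13: Permission denied)" entries above suggest the nginx worker cannot write its proxy temp files. A sketch of how one might confirm and fix the ownership, assuming a Debian-style install where the workers run as www-data:)

ps -o user= -p "$(pgrep -f 'nginx: worker' | head -n 1)"   # which user the workers run as
ls -ld /var/lib/nginx /var/lib/nginx/proxy                 # who owns the proxy temp dirs
sudo chown -R www-data:www-data /var/lib/nginx             # align ownership with the worker user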

FINAL UPDATE:
The issue was somehow related to my nginx installation/configuration.
It wasn't the content of the files; it was their permissions.

I purged all of my nginx config and files etc., and it works now: https://bitwarden.furrkan.de !
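
For anyone wanting to reproduce the purge, something like this would do it (assuming Raspberry Pi OS / Debian packaging; it removes all nginx config, so back up anything you need first):

sudo apt purge nginx nginx-common   # remove the packages including their config files
sudo rm -rf /var/lib/nginx          # clear any leftover runtime/temp directories
sudo apt install nginx              # reinstall with fresh defaults and permissions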

Thanks for all the help! <3

Furkan

@Furkan, since the same behavior has been described here (Vaultwarden keeps loading), might you have run out of disk space? According to "nginx closes connection on some pictures" on Server Fault, this could be another way this can happen.
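
A quick way to check that theory on the host:

df -h              # free space per filesystem
docker system df   # space used by Docker images, containers, and volumes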

Hey,
disk space was never my problem; I always had around 40 GB left on my SD card. I didn't check whether my Docker volume can be limited, though. I don't know if that's possible, but I wouldn't guess that it was the reason.


Thanks for checking. :+1:


Maybe to reply to this (I registered specifically for this :smiley: ).

I also ran into the same issue you had. For me, it started after tinkering with my Nginx settings to pass some additional headers for my API. After that, I started getting errors like:

net::ERR_QUIC_PROTOCOL_ERROR
and
net::ERR_HTTP2_PROTOCOL_ERROR 200

Like you, I had also run the setup successfully for some time (although I did notice some wacky loading sometimes). But unlike you, I wasn't quite prepared to throw away all my settings, as I've got quite a few live services running (my blog that is only being read by bots :wink: ).

The Nginx logging leaves you quite in the dark: I just got the error 13 (permission denied) errors. So I dived into the QUIC protocol ones, but that left me none the wiser.

Long story short, what fixed it for me was something I found related to the second error: HTTP2_PROTOCOL_ERROR. If you're running into this, you could try adding either of the following lines:

  gzip off;                    # stop nginx from compressing (and thus buffering) responses
  proxy_max_temp_file_size 0;  # never spool proxied responses to temp files on disk

This fixed it for me.

The Docker img file was full; I changed the size of the file on Unraid.