Container exiting

Hi,
I am experimenting with Vaultwarden to see if it could help me and my team members manage passwords, but I am running into a problem and don't have many ideas how to debug it.

I am running Vaultwarden in a container using podman on an up-to-date Fedora host (VM).
The command to create the container was:

podman run -d --userns=keep-id  --name vault  -e SIGNUPS_ALLOWED=true  -e ROCKET_PORT=8080  -v /home/pete/vaultwarden/data/:/data/:Z  -p 8080:8080 vaultwarden/server:latest

I can reach the web UI on localhost, create a user, connect a Bitwarden client and so on, but the server keeps shutting down unexpectedly.

What I found in the logs is the following:

[INFO] No .env file found.
[2022-12-21 10:23:01.409][start][INFO] Rocket has launched from http://0.0.0.0:8080
[2022-12-21 10:23:01.764][vaultwarden][INFO] Exiting vaultwarden!
[2022-12-21 10:23:01.764][rocket::server][WARN] Received SIGTERM. Shutdown already in progress.
[2022-12-21 10:23:01.764][vaultwarden][INFO] Vaultwarden process exited!

If anyone knows of a good guide for setting up Vaultwarden with podman, I would appreciate any hints. Thanks.

See: Using Podman · dani-garcia/vaultwarden Wiki · GitHub

But that generally describes what you are doing here already.
I just tested it, and it seems to be working just fine for me.

Try setting -e LOG_LEVEL=trace and see if that gives some more info.
Also, see what happens without -d, or use -it instead.

Or, try to start podman with some more logging via podman --log-level=debug run ...
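
Putting those suggestions together, roughly something like this (just a sketch based on your original command; adjust the volume path and container name to your setup):

podman --log-level=debug run -it \
  --userns=keep-id \
  --name vault \
  -e LOG_LEVEL=trace \
  -e SIGNUPS_ALLOWED=true \
  -e ROCKET_PORT=8080 \
  -v /home/pete/vaultwarden/data/:/data/:Z \
  -p 8080:8080 \
  vaultwarden/server:latest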

I have increased the verbosity with --log-level=debug and -e LOG_LEVEL=trace. That works fine, but now I can't observe the unexpected shutdown anymore.

Maybe that is because I also used -it instead of -d.

Maybe I should keep the verbose output and go back to -d.
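
That would be the same command with -d again, and then watching the container from the host side, e.g. something like:

podman logs -f vault
podman ps -a --filter name=vault
podman inspect --format '{{.State.ExitCode}}' vault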

I am still experiencing problems with the Vaultwarden container exiting suddenly.
I have moved from an F37 VM to an F36 bare-metal machine, but that didn't make a difference; the Vaultwarden container is still crashing.

What I see in the logs is:

podman run -it --log-level=debug --userns=keep-id  --name vaultwarden -e LOG_LEVEL=trace -e EXTENDED_LOGGING=true -e LOG_FILE=/data/vaultwarden.log -e SIGNUPS_ALLOWED=true  -e ROCKET_PORT=8080  -v /home/florian/vaultwarden/data/:/data/:Z  -p 8080:8080  vaultwarden/server:latest
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called run.PersistentPreRunE(podman run -it --log-level=debug --userns=keep-id --name vaultwarden -e LOG_LEVEL=trace -e EXTENDED_LOGGING=true -e LOG_FILE=/data/vaultwarden.log -e SIGNUPS_ALLOWED=true -e ROCKET_PORT=8080 -v /home/florian/vaultwarden/data/:/data/:Z -p 8080:8080 vaultwarden/server:latest) 
DEBU[0000] Merged system config "/usr/share/containers/containers.conf" 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /home/florian/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /home/florian/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1000/containers     
DEBU[0000] Using static dir /home/florian/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /home/florian/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Cached value indicated that metacopy is not being used 
DEBU[0000] Cached value indicated that native-diff is usable 
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false 
DEBU[0000] Initializing event backend journald          
DEBU[0000] Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument 
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument 
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument 
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument 
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument 
DEBU[0000] Using OCI runtime "/usr/bin/crun"            
INFO[0000] Setting parallel job count to 13             
DEBU[0000] Adding port mapping from 8080 to 8080 length 1 protocol "" 
DEBU[0000] Pulling image vaultwarden/server:latest (policy: missing) 
DEBU[0000] Looking up image "vaultwarden/server:latest" in local containers storage 
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] } 
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf" 
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/000-shortnames.conf" 
DEBU[0000] Trying "docker.io/vaultwarden/server:latest" ... 
DEBU[0000] parsed reference into "[overlay@/home/florian/.local/share/containers/storage+/run/user/1000/containers]@ed62d51c025e15b520f544c9e293086ef0541b411bb5a6af98cdaab9ef136654" 
DEBU[0000] Found image "vaultwarden/server:latest" as "docker.io/vaultwarden/server:latest" in local containers storage 
DEBU[0000] Found image "vaultwarden/server:latest" as "docker.io/vaultwarden/server:latest" in local containers storage ([overlay@/home/florian/.local/share/containers/storage+/run/user/1000/containers]@ed62d51c025e15b520f544c9e293086ef0541b411bb5a6af98cdaab9ef136654) 
DEBU[0000] exporting opaque data as blob "sha256:ed62d51c025e15b520f544c9e293086ef0541b411bb5a6af98cdaab9ef136654" 
DEBU[0000] Looking up image "docker.io/vaultwarden/server:latest" in local containers storage 
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] } 
DEBU[0000] Trying "docker.io/vaultwarden/server:latest" ... 
DEBU[0000] parsed reference into "[overlay@/home/florian/.local/share/containers/storage+/run/user/1000/containers]@ed62d51c025e15b520f544c9e293086ef0541b411bb5a6af98cdaab9ef136654" 
DEBU[0000] Found image "docker.io/vaultwarden/server:latest" as "docker.io/vaultwarden/server:latest" in local containers storage 
DEBU[0000] Found image "docker.io/vaultwarden/server:latest" as "docker.io/vaultwarden/server:latest" in local containers storage ([overlay@/home/florian/.local/share/containers/storage+/run/user/1000/containers]@ed62d51c025e15b520f544c9e293086ef0541b411bb5a6af98cdaab9ef136654) 
DEBU[0000] exporting opaque data as blob "sha256:ed62d51c025e15b520f544c9e293086ef0541b411bb5a6af98cdaab9ef136654" 
DEBU[0000] User mount /home/florian/vaultwarden/data/:/data/ options [Z] 
DEBU[0000] Looking up image "vaultwarden/server:latest" in local containers storage 
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] } 
DEBU[0000] Trying "docker.io/vaultwarden/server:latest" ... 
DEBU[0000] parsed reference into "[overlay@/home/florian/.local/share/containers/storage+/run/user/1000/containers]@ed62d51c025e15b520f544c9e293086ef0541b411bb5a6af98cdaab9ef136654" 
DEBU[0000] Found image "vaultwarden/server:latest" as "docker.io/vaultwarden/server:latest" in local containers storage 
DEBU[0000] Found image "vaultwarden/server:latest" as "docker.io/vaultwarden/server:latest" in local containers storage ([overlay@/home/florian/.local/share/containers/storage+/run/user/1000/containers]@ed62d51c025e15b520f544c9e293086ef0541b411bb5a6af98cdaab9ef136654) 
DEBU[0000] exporting opaque data as blob "sha256:ed62d51c025e15b520f544c9e293086ef0541b411bb5a6af98cdaab9ef136654" 
DEBU[0000] Inspecting image ed62d51c025e15b520f544c9e293086ef0541b411bb5a6af98cdaab9ef136654 
DEBU[0000] exporting opaque data as blob "sha256:ed62d51c025e15b520f544c9e293086ef0541b411bb5a6af98cdaab9ef136654" 
DEBU[0000] exporting opaque data as blob "sha256:ed62d51c025e15b520f544c9e293086ef0541b411bb5a6af98cdaab9ef136654" 
DEBU[0000] Inspecting image ed62d51c025e15b520f544c9e293086ef0541b411bb5a6af98cdaab9ef136654 
DEBU[0000] Inspecting image ed62d51c025e15b520f544c9e293086ef0541b411bb5a6af98cdaab9ef136654 
DEBU[0000] Inspecting image ed62d51c025e15b520f544c9e293086ef0541b411bb5a6af98cdaab9ef136654 
DEBU[0000] Image has volume at "/data"                  
DEBU[0000] Adding anonymous image volume at "/data"     
DEBU[0000] using systemd mode: false                    
DEBU[0000] New container has a health check             
DEBU[0000] setting container name vaultwarden           
DEBU[0000] No hostname set; container's hostname will default to runtime default 
DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json" 
DEBU[0000] Adding mount /proc                           
DEBU[0000] Adding mount /dev                            
DEBU[0000] Adding mount /dev/pts                        
DEBU[0000] Adding mount /dev/mqueue                     
DEBU[0000] Adding mount /sys                            
DEBU[0000] Adding mount /sys/fs/cgroup                  
DEBU[0000] Allocated lock 0 for container e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486 
DEBU[0000] parsed reference into "[overlay@/home/florian/.local/share/containers/storage+/run/user/1000/containers]@ed62d51c025e15b520f544c9e293086ef0541b411bb5a6af98cdaab9ef136654" 
DEBU[0000] exporting opaque data as blob "sha256:ed62d51c025e15b520f544c9e293086ef0541b411bb5a6af98cdaab9ef136654" 
DEBU[0000] Cached value indicated that idmapped mounts for overlay are not supported 
DEBU[0000] Check for idmapped mounts support            
DEBU[0000] Created container "e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486" 
DEBU[0000] Container "e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486" has work directory "/home/florian/.local/share/containers/storage/overlay-containers/e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486/userdata" 
DEBU[0000] Container "e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486" has run directory "/run/user/1000/containers/overlay-containers/e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486/userdata" 
DEBU[0000] Handling terminal attach                     
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Cached value indicated that metacopy is not being used 
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false 
DEBU[0000] Cached value indicated that idmapped mounts for overlay are not supported 
DEBU[0000] Check for idmapped mounts support            
DEBU[0000] overlay: mount_data=lowerdir=/home/florian/.local/share/containers/storage/overlay/l/WIJRHWXIIFUEC4KW3PFUAJCZ5S:/home/florian/.local/share/containers/storage/overlay/l/WIJRHWXIIFUEC4KW3PFUAJCZ5S/../diff1:/home/florian/.local/share/containers/storage/overlay/l/CDNUVR5KXJGKYYXFPHO26UC4L7:/home/florian/.local/share/containers/storage/overlay/l/77RTYD24RWI6P4MN54AIF72BUJ:/home/florian/.local/share/containers/storage/overlay/l/QIM6G6SHP43ZXZQA2U4ICRGTV3:/home/florian/.local/share/containers/storage/overlay/l/CZOTVJXXRQD7XIALAROOQ7LHWC:/home/florian/.local/share/containers/storage/overlay/l/SN6IYGB4ZQL745DUNXMP7QC4FJ:/home/florian/.local/share/containers/storage/overlay/l/SK4CDHEVRVGYOSL3Y26XXAMVPX,upperdir=/home/florian/.local/share/containers/storage/overlay/e94b1a37d436b22770f6d36c38b0c137e8509025395e789ea2dcf668476437c8/diff,workdir=/home/florian/.local/share/containers/storage/overlay/e94b1a37d436b22770f6d36c38b0c137e8509025395e789ea2dcf668476437c8/work,,userxattr,context="system_u:object_r:container_file_t:s0:c149,c601" 
DEBU[0000] Mounted container "e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486" at "/home/florian/.local/share/containers/storage/overlay/e94b1a37d436b22770f6d36c38b0c137e8509025395e789ea2dcf668476437c8/merged" 
DEBU[0000] Created root filesystem for container e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486 at /home/florian/.local/share/containers/storage/overlay/e94b1a37d436b22770f6d36c38b0c137e8509025395e789ea2dcf668476437c8/merged 
DEBU[0000] found local resolver, using "/run/systemd/resolve/resolv.conf" to get the nameservers 
DEBU[0000] Modifying container e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486 /etc/passwd 
DEBU[0000] Modifying container e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486 /etc/group 
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode subscription 
DEBU[0000] Setting Cgroups for container e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486 to user.slice:libpod:e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486 
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d 
DEBU[0000] Workdir "/" resolved to host path "/home/florian/.local/share/containers/storage/overlay/e94b1a37d436b22770f6d36c38b0c137e8509025395e789ea2dcf668476437c8/merged" 
DEBU[0000] Created OCI spec for container e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486 at /home/florian/.local/share/containers/storage/overlay-containers/e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486/userdata/config.json 
DEBU[0000] /usr/bin/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486 -u e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486 -r /usr/bin/crun -b /home/florian/.local/share/containers/storage/overlay-containers/e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486/userdata -p /run/user/1000/containers/overlay-containers/e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486/userdata/pidfile -n vaultwarden --exit-dir /run/user/1000/libpod/tmp/exits --full-attach -s -l journald --log-level debug --syslog -t --conmon-pidfile /run/user/1000/containers/overlay-containers/e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/florian/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg  --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /home/florian/.local/share/containers/storage/volumes --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486]"
INFO[0000] Running conmon under slice user.slice and unitName libpod-conmon-e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486.scope 
DEBU[0000] Received: 35697                              
INFO[0000] Got Conmon PID as 35691                      
DEBU[0000] Created container e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486 in OCI runtime 
DEBU[0000] creating systemd-transient files: systemd-run [--user --setenv=PATH=/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/home/florian/bin --unit e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486 --on-unit-inactive=1m0s --timer-property=AccuracySec=1s /usr/bin/podman healthcheck run e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486] 
DEBU[0000] slirp4netns command: /usr/bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp --enable-ipv6 -c -e 3 -r 4 35697 tap0 
DEBU[0000] rootlessport: time="2022-12-22T09:36:45+01:00" level=info msg="Starting parent driver" 
DEBU[0000] rootlessport: time="2022-12-22T09:36:45+01:00" level=info msg="opaque=map[builtin.readypipepath:/run/user/1000/libpod/tmp/rootlessport1442319823/.bp-ready.pipe builtin.socketpath:/run/user/1000/libpod/tmp/rootlessport1442319823/.bp.sock]" 
DEBU[0000] rootlessport: time="2022-12-22T09:36:45+01:00" level=info msg="Starting child driver in child netns (\"/proc/self/exe\" [rootlessport-child])" 
DEBU[0000] rootlessport: time="2022-12-22T09:36:45+01:00" level=info msg="Waiting for initComplete" 
DEBU[0000] rootlessport: time="2022-12-22T09:36:45+01:00" level=info msg="initComplete is closed; parent and child established the communication channel"
time="2022-12-22T09:36:45+01:00" level=info msg="Exposing ports [{ 8080 8080 1 tcp}]" 
DEBU[0000] rootlessport: time="2022-12-22T09:36:45+01:00" level=info msg=Ready 
DEBU[0000] rootlessport is ready                        
DEBU[0000] Attaching to container e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486 
DEBU[0000] Received a resize event: {Width:238 Height:59} 
DEBU[0000] Starting container e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486 with command [/start.sh] 
DEBU[0000] Started container e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486 
DEBU[0000] Notify sent successfully                     
/--------------------------------------------------------------------\
|                        Starting Vaultwarden                        |
|                           Version 1.26.0                           |
|--------------------------------------------------------------------|
| This is an *unofficial* Bitwarden implementation, DO NOT use the   |
| official channels to report bugs/features, regardless of client.   |
| Send usage/configuration questions or feature requests to:         |
|   https://vaultwarden.discourse.group/                             |
| Report suspected bugs/issues in the software itself at:            |
|   https://github.com/dani-garcia/vaultwarden/issues/new            |
\--------------------------------------------------------------------/

[INFO] No .env file found.

DEBU[0000] Enabling signal proxying                     
INFO[0000] Received shutdown.Stop(), terminating!        PID=35677
[2022-12-22 08:36:45.361][mio::poll][TRACE] registering event source with poller: token=Token(0), interests=READABLE
[2022-12-22 08:36:45.366][mio::poll][TRACE] registering event source with poller: token=Token(1), interests=READABLE | WRITABLE
[2022-12-22 08:36:45.367][routes][INFO] Routes loaded:
[2022-12-22 08:36:45.367][routes][INFO] GET    /
[2022-12-22 08:36:45.367][routes][INFO] GET    /<p..> [10]
[2022-12-22 08:36:45.367][routes][INFO] GET    /admin
[2022-12-22 08:36:45.367][routes][INFO] GET    /alive
[2022-12-22 08:36:45.367][routes][INFO] DELETE /api/accounts
[...]
[2022-12-22 08:36:45.372][routes][INFO] GET    /vw_static/<filename>
[2022-12-22 08:36:45.372][start][INFO] Rocket has launched from http://0.0.0.0:8080
[2022-12-22 08:36:45.424][mio::poll][TRACE] registering event source with poller: token=Token(2), interests=READABLE | WRITABLE
[2022-12-22 08:36:45.424][tracing::span][TRACE] parse_headers;
[2022-12-22 08:36:45.424][tracing::span::active][TRACE] -> parse_headers;
[2022-12-22 08:36:45.424][tracing::span::active][TRACE] <- parse_headers;
[2022-12-22 08:36:45.424][tracing::span][TRACE] -- parse_headers;
[2022-12-22 08:36:45.424][request][INFO] GET /alive
[2022-12-22 08:36:45.425][response][INFO] (alive) GET /alive => 200 OK
[2022-12-22 08:36:45.425][tracing::span][TRACE] encode_headers;
[2022-12-22 08:36:45.425][tracing::span::active][TRACE] -> encode_headers;
[2022-12-22 08:36:45.425][tracing::span::active][TRACE] <- encode_headers;
[2022-12-22 08:36:45.425][tracing::span][TRACE] -- encode_headers;
[2022-12-22 08:36:45.425][mio::poll][TRACE] deregistering event source from poller
[2022-12-22 08:36:53.034][rocket::server][WARN] Received SIGTERM. Requesting shutdown.
[2022-12-22 08:36:53.034][vaultwarden][INFO] Exiting vaultwarden!
[2022-12-22 08:36:53.034][mio::poll][TRACE] deregistering event source from poller
[2022-12-22 08:36:53.034][vaultwarden][INFO] Vaultwarden process exited!
[2022-12-22 08:36:53.035][mio::poll][TRACE] deregistering event source from poller
DEBU[0008] Called run.PersistentPostRunE(podman run -it --log-level=debug --userns=keep-id --name vaultwarden -e LOG_LEVEL=trace -e EXTENDED_LOGGING=true -e LOG_FILE=/data/vaultwarden.log -e SIGNUPS_ALLOWED=true -e ROCKET_PORT=8080 -v /home/florian/vaultwarden/data/:/data/:Z -p 8080:8080 vaultwarden/server:latest) 
DEBU[0008] [graphdriver] trying provided driver "overlay" 
DEBU[0008] Cached value indicated that overlay is supported 
DEBU[0008] Cached value indicated that overlay is supported 
DEBU[0008] Cached value indicated that metacopy is not being used 
DEBU[0008] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false 
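
I am not sure whether it is related, but two things stand out to me in this output: the early INFO[0000] Received shutdown.Stop(), terminating! line, and the fact that the image defines a health check ("New container has a health check", plus the systemd-run ... podman healthcheck run timer). To see what that health check actually does and whether it passes, something along these lines should work (assuming the container is still running and named vaultwarden as above):

podman inspect --format '{{json .Config.Healthcheck}}' vaultwarden
podman healthcheck run vaultwarden; echo $?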

If anyone has any ideas what could be wrong here and how to debug this further, please let me know.
Also, let me know what other information is needed to help me. Thank you.

Here is a part of podman’s system log…

container died

09:46:55 podman: 2022-12-22 09:46:55.877436148 +0100 CET m=+0.220414279 container cleanup e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486 (image=docker.io/vaultwarden/server:latest, name=vaultwarden, org.opencontainers.image.documentation=https://github.com/dani-garcia/vaultwarden/wiki, org.opencontainers.image.licenses=GPL-3.0-only, org.opencontainers.image.revision=638766b346dc0e00c5db7935d21e48354d632335, org.opencontainers.image.source=https://github.com/dani-garcia/vaultwarden, org.opencontainers.image.url=https://hub.docker.com/r/vaultwarden/server, org.opencontainers.image.version=1.26.0, org.opencontainers.image.created=2022-10-14T18:11:03+00:00)
09:46:55 systemd: Stopped e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486.timer - /usr/bin/podman healthcheck run e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486.
09:46:55 podman: 2022-12-22 09:46:55.728369265 +0100 CET m=+0.251779775 container died e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486 (image=docker.io/vaultwarden/server:latest, name=vaultwarden, org.opencontainers.image.documentation=https://github.com/dani-garcia/vaultwarden/wiki, org.opencontainers.image.licenses=GPL-3.0-only, org.opencontainers.image.revision=638766b346dc0e00c5db7935d21e48354d632335, org.opencontainers.image.source=https://github.com/dani-garcia/vaultwarden, org.opencontainers.image.url=https://hub.docker.com/r/vaultwarden/server, org.opencontainers.image.version=1.26.0, org.opencontainers.image.created=2022-10-14T18:11:03+00:00)
09:46:55 systemd: Started podman-39822.scope.
09:46:55 podman: 2022-12-22 09:46:55.471212218 +0100 CET m=+0.204795449 container exec_died e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486 (image=docker.io/vaultwarden/server:latest, name=vaultwarden, org.opencontainers.image.url=https://hub.docker.com/r/vaultwarden/server, org.opencontainers.image.version=1.26.0, org.opencontainers.image.created=2022-10-14T18:11:03+00:00, org.opencontainers.image.documentation=https://github.com/dani-garcia/vaultwarden/wiki, org.opencontainers.image.licenses=GPL-3.0-only, org.opencontainers.image.revision=638766b346dc0e00c5db7935d21e48354d632335, org.opencontainers.image.source=https://github.com/dani-garcia/vaultwarden)
09:46:55 systemd: Started e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486.service - /usr/bin/podman healthcheck run e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486.
09:46:55 podman: 2022-12-22 09:46:55.177349709 +0100 CET m=+0.341326361 container init e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486 (image=docker.io/vaultwarden/server:latest, name=vaultwarden, org.opencontainers.image.source=https://github.com/dani-garcia/vaultwarden, org.opencontainers.image.url=https://hub.docker.com/r/vaultwarden/server, org.opencontainers.image.version=1.26.0, org.opencontainers.image.created=2022-10-14T18:11:03+00:00, org.opencontainers.image.documentation=https://github.com/dani-garcia/vaultwarden/wiki, org.opencontainers.image.licenses=GPL-3.0-only, org.opencontainers.image.revision=638766b346dc0e00c5db7935d21e48354d632335)
09:46:55 systemd: Started e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486.timer - /usr/bin/podman healthcheck run e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486.
09:46:34 podman: vaultwarden
09:46:34 systemd: Stopped e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486.timer - /usr/bin/podman healthcheck run e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486.
09:46:34 podman: 2022-12-22 09:46:34.501029315 +0100 CET m=+0.037833734 container died e27302318a231001b21f1f367ef9f5e31b547972b294652abce5a95359ba4486 (image=docker.io/vaultwarden/server:latest, name=vaultwarden, org.opencontainers.image.source=https://github.com/dani-garcia/vaultwarden, org.opencontainers.image.url=https://hub.docker.com/r/vaultwarden/server, org.opencontainers.image.version=1.26.0, org.opencontainers.image.created=2022-10-14T18:11:03+00:00, org.opencontainers.image.documentation=https://github.com/dani-garcia/vaultwarden/wiki, org.opencontainers.image.licenses=GPL-3.0-only, org.opencontainers.image.revision=638766b346dc0e00c5db7935d21e48354d632335)
09:46:34 systemd: Started podman-39697.scope.
09:46:34 podman: 2022-12-22 09:46:34.352948044 +0100 CET m=+0.144962759 container exec_died
...

I thought it could be SELinux-related.
The /data volume looks like this:

drwxr-xr-x.  6 florian florian system_u:object_r:container_file_t:s0:c341,c953   4096 22. Dez 10:51 .
drwx--x---+ 63 florian florian unconfined_u:object_r:user_home_dir_t:s0          4096 22. Dez 10:09 ..
drwxr-xr-x.  2 florian florian system_u:object_r:container_file_t:s0:c341,c953   4096 22. Dez 10:12 attachments
-rw-r--r--.  1 florian florian system_u:object_r:container_file_t:s0:c341,c953 196608 22. Dez 10:12 db.sqlite3
-rw-r--r--.  1 florian florian system_u:object_r:container_file_t:s0:c341,c953  32768 22. Dez 10:51 db.sqlite3-shm
-rw-r--r--.  1 florian florian system_u:object_r:container_file_t:s0:c341,c953      0 22. Dez 10:51 db.sqlite3-wal
drwxr-xr-x.  2 florian florian system_u:object_r:container_file_t:s0:c341,c953   4096 22. Dez 10:12 icon_cache
-rw-r--r--.  1 florian florian system_u:object_r:container_file_t:s0:c341,c953   1675 22. Dez 10:12 rsa_key.pem
-rw-r--r--.  1 florian florian system_u:object_r:container_file_t:s0:c341,c953    451 22. Dez 10:12 rsa_key.pub.pem
drwxr-xr-x.  2 florian florian system_u:object_r:container_file_t:s0:c341,c953   4096 22. Dez 10:12 sends
drwxr-xr-x.  2 florian florian system_u:object_r:container_file_t:s0:c341,c953   4096 22. Dez 10:12 tmp

But even after setting SELinux to permissive with setenforce, the container still quits:

sudo setenforce Permissive
podman run -it --name vaultwarden -v /home/florian/vw-data/:/data/:Z -e ROCKET_PORT=8080 -p 8080:8080 vaultwarden/server:latest
/--------------------------------------------------------------------\
|                        Starting Vaultwarden                        |
|                           Version 1.26.0                           |
|--------------------------------------------------------------------|
| This is an *unofficial* Bitwarden implementation, DO NOT use the   |
| official channels to report bugs/features, regardless of client.   |
| Send usage/configuration questions or feature requests to:         |
|   https://vaultwarden.discourse.group/                             |
| Report suspected bugs/issues in the software itself at:            |
|   https://github.com/dani-garcia/vaultwarden/issues/new            |
\--------------------------------------------------------------------/

[INFO] No .env file found.

[2022-12-22 09:48:29.677][start][INFO] Rocket has launched from http://0.0.0.0:8080
[2022-12-22 09:48:45.303][vaultwarden][INFO] Exiting vaultwarden!
[2022-12-22 09:48:45.304][rocket::server][WARN] Received SIGTERM. Shutdown already in progress.
[2022-12-22 09:48:45.304][vaultwarden][INFO] Vaultwarden process exited!
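
Even in permissive mode any would-be denials should still get logged, so to rule SELinux out completely I can check the audit log for AVC messages around the time of the exit, e.g. (assuming auditd is running):

sudo ausearch -m avc -ts recent
sudo ausearch -m avc -ts today -i | grep -i vaultwarden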