arch
December 18, 2023, 12:17pm
1
I want to run and test vaultwarden/server:1.30.1 on my k8s cluster, pulled from my private GitLab registry. When I run Vaultwarden I receive the error message "database is locked". Where could the fault be? Vaultwarden data is stored on an NFS server, and access from the k8s cluster to the NFS server is granted.
Logs from my Vaultwarden pod.
kubectl logs vaultwarden-765ddb6b67-4wjtw
[2023-12-18 11:47:00.299][panic][ERROR] thread 'main' panicked at 'Error running migrations: DatabaseError(Unknown, "database is locked")': src/db/mod.rs:452
0: vaultwarden::init_logging::{{closure}}
1: std::panicking::rust_panic_with_hook
2: std::panicking::begin_panic_handler::{{closure}}
3: std::sys_common::backtrace::__rust_end_short_backtrace
4: rust_begin_unwind
5: core::panicking::panic_fmt
6: core::result::unwrap_failed
7: vaultwarden::db::sqlite_migrations::run_migrations
8: vaultwarden::db::DbPool::from_config
9: vaultwarden::main::{{closure}}
10: vaultwarden::main
11: std::sys_common::backtrace::__rust_begin_short_backtrace
12: std::rt::lang_start::{{closure}}
13: std::rt::lang_start_internal
14: main
15:
16: __libc_start_main
17: _start
arch
December 18, 2023, 12:17pm
2
My Vaultwarden Deployment YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vaultwarden
  namespace: vaultwarden
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vaultwarden
  template:
    metadata:
      labels:
        app: vaultwarden
    spec:
      containers:
        - name: vaultwarden
          image: gitlab............./vaultwarden-server:1.30.1
          ports:
            - containerPort: 80
          env:
            - name: ADMIN_TOKEN
              value: "mySuperScreetPassword"
            - name: ENABLE_DB_WAL
              value: "false"
          volumeMounts:
            - mountPath: /data
              name: vw-data
      imagePullSecrets:
        - name: my-privte-gitlab-access-key
      volumes:
        - name: vw-data
          persistentVolumeClaim:
            claimName: vaultwarden-pv-claim
My PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: vaultwarden-persistent-storage
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 3Gi
  nfs:
    path: ...
    server: IpAdressOfNfs
Also my PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vaultwarden-pv-claim
  namespace: vaultwarden
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
  volumeName: vaultwarden-persistent-storage
arch
December 18, 2023, 12:25pm
3
My Pod description
k describe pods vaultwarden-765ddb6b67-4wjtw
Name:             vaultwarden-765ddb6b67-4wjtw
Namespace:        vaultwarden
Priority:         0
Service Account:  default
Node:             nodeX
Start Time:       Mon, 18 Dec 2023 13:44:08 +0200
Labels:           app=vaultwarden
                  pod-template-hash=765ddb6b67
Annotations:      cni.projectcalico.org/containerID: f5a2dfb92d2a6517b83ea2fc62d83ddc783079969c25e451f784a4a81676282c
                  cni.projectcalico.org/podIP: ip
                  cni.projectcalico.org/podIPs: ip
Status:           Running
IP:               ip
IPs:
  IP:  ip
Controlled By:    ReplicaSet/vaultwarden-765ddb6b67
Containers:
  vaultwarden:
    Container ID:   docker://e8bdd69969b3a7c838fb7c7ae81d44068a5d36153bbdf2fd460396126c2a93b5
    Image:          myprivategitalb
    Image ID:       docker-pullable://myprivategitalbiamgeid
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    101
      Started:      Mon, 18 Dec 2023 14:07:35 +0200
      Finished:     Mon, 18 Dec 2023 14:07:48 +0200
    Ready:          False
    Restart Count:  9
    Environment:
      ADMIN_TOKEN:    $argon2id$v=19$m=65540,t=3,p=4$ejIwcGpzbFVQZk9CT3FEcXczRXEvNnJoTnVHQnp4T0J0WjZHb3pJdk1aZz0$3wre9WQoeMicvdRg6UysiaAnmIpb3qlvy99UdlIUP3E
      ENABLE_DB_WAL:  false
    Mounts:
      /data from vw-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lxqt7 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  vw-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  vaultwarden-pv-claim
    ReadOnly:   false
  kube-api-access-lxqt7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  25m                  default-scheduler  Successfully assigned vaultwarden/vaultwarden-765ddb6b67-4wjtw to nodex
  Normal   Pulling    25m                  kubelet            Pulling image "myprivategitalb"
  Normal   Pulled     25m                  kubelet            Successfully pulled image "myprivategitalb" in 5.876476969s (5.876493272s including waiting)
  Normal   Created    23m (x5 over 25m)    kubelet            Created container vaultwarden
  Normal   Started    23m (x5 over 25m)    kubelet            Started container vaultwarden
  Normal   Pulled     23m (x4 over 25m)    kubelet            Container image "myprivategitalb" already present on machine
  Warning  BackOff    45s (x106 over 25m)  kubelet            Back-off restarting failed container
I think you need to configure the PVC access mode to be ReadWriteOnce.
It looks like the database file is still in use by another pod.
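For illustration, a minimal sketch of the PV/PVC pair from the earlier posts with the access mode switched to ReadWriteOnce (the resource names are reused from above; the NFS path and server address are placeholders, not real values):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: vaultwarden-persistent-storage
spec:
  accessModes:
    - ReadWriteOnce            # was ReadWriteMany
  capacity:
    storage: 3Gi
  nfs:
    path: /exports/vaultwarden   # placeholder: your actual NFS export path
    server: 192.0.2.10           # placeholder: your NFS server address
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vaultwarden-pv-claim
  namespace: vaultwarden
spec:
  accessModes:
    - ReadWriteOnce            # must match the PV's access mode
  resources:
    requests:
      storage: 3Gi
  volumeName: vaultwarden-persistent-storage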
arch
December 19, 2023, 2:04pm
5
Hi, PVs and PVCs must have matching access modes to be compatible.
Then just make sure the pod is stopped/killed before the update takes place.
Using the Recreate strategy should solve your issue then.
---
apiVersion: apps/v1
kind: Deployment
spec:
  strategy:
    type: Recreate
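With Recreate, Kubernetes terminates the old pod before it creates the replacement, so only one pod ever has the SQLite file on the NFS share open during a rollout; the default RollingUpdate strategy briefly runs the old and new pods side by side, which can leave the file locked by the previous pod.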
Otherwise, I would suggest using a different database backend, like MariaDB or PostgreSQL.
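For example, switching to PostgreSQL is roughly a matter of setting DATABASE_URL on the container; a sketch, where the Secret name, key, and connection string are hypothetical placeholders rather than values from this thread:

env:
  - name: DATABASE_URL
    valueFrom:
      secretKeyRef:
        name: vaultwarden-db   # hypothetical Secret holding the connection string
        key: database-url      # e.g. postgresql://vaultwarden:<password>@<db-host>:5432/vaultwarden

As far as I know, the official vaultwarden/server images ship with the SQLite, MySQL/MariaDB, and PostgreSQL backends enabled, so no image rebuild should be needed.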