Include sqlite3 in container-image

Hi,

We run vaultwarden in kubernetes with /data being snapshotted every night.
However, those snapshots capture the live DB files; a proper dump would be preferable.
To ease dumping the DBs, it would be nice if sqlite3 were included in the official image. This would make it possible to run a scheduled job in k8s using the same image as vaultwarden itself.

Would including sqlite3 in the image be reasonable?

Alternatively, extend the functionality of the existing backup button in /admin, and make it possible to dump the sqlite3 db on a schedule.


You could also create a batch job that calls the backup endpoint.
Or, use a sidecar which has a sqlite3 binary and is able to use the same volume.
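For the sidecar route, a minimal sketch of what it could run, assuming the shared data volume is at /data and a backups/ subdirectory exists (both are assumptions, adjust to your layout):

# sqlite3's .backup takes a consistent copy even while vaultwarden is writing,
# unlike snapshotting the live db.sqlite3 file directly.
sqlite3 /data/db.sqlite3 ".backup '/data/backups/db-$(date +%Y%m%d-%H%M).sqlite3'"

# Optionally prune copies older than 10 days.
find /data/backups -maxdepth 1 -name 'db-*.sqlite3' -mtime +10 -delete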


Alternatively, I added the install to the init script which runs on container startup:

Just mount a file at /etc/vaultwarden.sh with the contents of:
apt update && apt install sqlite3 -y

I did this for a while until I realized I could run kubernetes CronJobs mounting the same volume.

edit: if you decide to go this route, you can view my deployment as an example here

Sorry,

I just saw that you are using kubernetes. You can take a look at my current deployment here: https://github.com/Ryan-McD/gitops-home-cluster/tree/0d7173b25e24efa65434e1b075a1988a220c39b1/cluster/apps/security/vaultwarden

This works because pod affinity forces the CronJob to create its pod on the node that vaultwarden is running on; RWO PVCs can be mounted by another pod on the same node.

I would also like sqlite3 to be included in the docker image so I can run:
docker exec vaultwarden sqlite3 data/db.sqlite3 ".backup '/$backup_target/$backup_filename-$(date '+%Y%m%d-%H%M').sqlite3'"

It would make life a lot easier to have this included by default in the image rather than on the host.
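Until it is included, one workaround is a throwaway container that shares vaultwarden's volumes and brings its own sqlite3 binary. A rough sketch, assuming an alpine helper image and the usual /data layout (both assumptions, adjust to your setup):

docker run --rm --volumes-from vaultwarden alpine:3 \
  sh -c "apk add --no-cache sqlite && sqlite3 /data/db.sqlite3 \".backup '/data/db-backup-$(date '+%Y%m%d-%H%M').sqlite3'\""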

If you really want, you can install it with either apk or apt yourself.
Or, you could trigger the backup command via the admin interface.

I’m thinking of adding some special signals to trigger a backup a different way. Nothing is implemented yet, though.

Thanks for the replies!

For completeness:
I didn’t want to include another dependency in our k8s deployment of vaultwarden, so I ended up creating a k8s CronJob using the same image and mounts as vaultwarden itself. A backup.sh is injected into the CronJob pod via a ConfigMap, along with the ADMIN_TOKEN secret mounted as a file, and the default startup command is replaced with backup.sh.

The backup.sh file looks like this:

#!/bin/bash
set -euo pipefail
IFS=$'\n\t'

# Backups land in the data folder (defaults to /data when DATA_FOLDER is unset).
BACK_DIR=${DATA_FOLDER:=/data}
COOKIE=/tmp/kookie

# Log in to the admin page using the ADMIN_TOKEN mounted from the secret,
# keeping the session cookie for the next request.
echo "Fetching cookies"
curl -v --silent -X POST --cookie-jar "${COOKIE}" \
  --form "token=@/path/to/ADMIN_TOKEN" \
  http://bitwarden-service/admin/ > /dev/null

# Trigger vaultwarden's built-in backup endpoint, which writes a db_*.sqlite3
# copy into the data folder.
echo "Triggering backup"
curl -v --silent -X POST --cookie "${COOKIE}" \
  http://bitwarden-service/admin/config/backup_db > /dev/null

rm "${COOKIE}" || true

echo "List all backups in '${BACK_DIR}';"
ls -ltrh "${BACK_DIR}"/db_*.sqlite3 || true

# Keep 10 days of backups; delete anything older.
echo "Removing old backups;"
find "${BACK_DIR}/" -maxdepth 1 -name "db_*.sqlite3" -mtime +10 -print -delete