Confused with backing up Vaultwarden data folder

Being a linux-noob, I have a problem understanding how exactly to proceed with backing up my vault.
I use your official image from docker-hub and so I now have a /vw-data directory on my docker host.

In your wiki example for a basic backup command (assuming the data folder is named data), you wrote:

sqlite3 data/db.sqlite3 ".backup '/path/to/backups/db-$(date '+%Y%m%d-%H%M').sqlite3'"

Do I have to execute this command on the host (meaning I'd cd into /vw-data first), or should it be executed inside the container?

Kindest regards

To my knowledge the container does not ship the sqlite3 binary.
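For reference, a host-side invocation against the bind-mounted directory might look like this sketch (the /vw-backups target directory is an assumption for illustration, not from the wiki):

```shell
#!/bin/sh
# Run on the Docker host, not in the container (the image ships no sqlite3).
# /vw-data is the bind-mounted data directory from the post above;
# /vw-backups is a hypothetical destination directory.
mkdir -p /vw-backups
sqlite3 /vw-data/db.sqlite3 \
  ".backup '/vw-backups/db-$(date '+%Y%m%d-%H%M').sqlite3'"
```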

Thank you.

But if I execute the sqlite3 command on the host, how can I be sure that I get a consistent backup of the DB running in the Docker container?

At the moment this is the only concern that keeps me from using my Vaultwarden. I have a lot of passwords and want to be safe, with a backup I can restore in case I need it.

The sqlite3 .backup command should take care of locking (as opposed to e.g. just copying the database file), so you should not lose anything.

I use .backup to maintain a replica of my “master” vaultwarden, and it works quite OK. Obviously, if you write a lot to the database (many users constantly changing/adding passwords) there could be a risk of having a backup which is not up-to-date, but at least consistency should not be an issue.

I (think) it worked!

After installing sqlite3 on my host I now have a backup of the database in another directory.

Thanks a lot for helping!

I’d also like to note that the database backup is not the only factor here if you want to restore.

Please see

I would also recommend creating a cronjob to automate your SQLite backup command; this would let you keep, say, a week of db backups in your other directory in case something gets messed up.
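A minimal sketch of such a cron setup, assuming the example paths from earlier in the thread (the schedule and the 7-day retention are just illustrations):

```shell
#!/bin/sh
# Daily backup script, run from cron with an entry such as:
#   0 4 * * * /usr/local/bin/vw-backup.sh
# /vw-data and /vw-backups are assumed paths.
BACKUPS=/vw-backups
mkdir -p "$BACKUPS"
sqlite3 /vw-data/db.sqlite3 \
  ".backup '$BACKUPS/db-$(date '+%Y%m%d-%H%M').sqlite3'"
# drop snapshots older than a week
find "$BACKUPS" -name 'db-*.sqlite3' -mtime +7 -delete
```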

You could even couple this backup strategy with something like rsync or rclone to have an off-site backup of your Vaultwarden instance too!
This would allow you to easily restore if you had a system failure of some kind.

Thanks for your post cksapp.

I’ve read the Wiki and I’m aware of the fact that backing up all other files in the directory where the database resides is important.
I'll have to google around for the Linux “magic” I can use in a script, then automate the whole process with a cronjob running that script on a regular basis.
Copying to an SMB file server (which is backed up to the cloud) would be the next and final step.

If it helps, what I do is the following:

On the server where vaultwarden runs (in my case as the user “vaultwarden”), a cron job runs every day in the early morning that:

  1. creates a backup of the sqlite3 database and stores it in a specific folder
  2. copies the whole (host-side) data folder (which contains the database, the icons, etc.)
  3. copies the docker environment (.vaultwarden.env) and my script for starting the docker container.

and packs all of that in a .tgz.

Then, from another computer at home (which I use as “NAS”) a cron job grabs the backup (with scp, but rsync would be OK too, though it’s a small file, so scp does the job just fine).

The above is enough for a backup.

In my case, from that backup the .sqlite3 file is extracted and copied to another computer (a raspberry pi) at a specific place.

Another job at the raspberry pi checks at that place if the copied database is newer/different than the previous one, and if so, stops the container, deletes the database files (db.sqlite3-*), places the one from the backup in its place, and starts the container again.

This way the replica (here: raspberry pi) always has the data from the master vaultwarden.

It’s a bit complicated (I also delete old backups to keep only the last N copies), but it works quite well :slight_smile:


Would you kindly post the scripts you run daily via cronjob?

OK. They may be too specific for my use-case, but hopefully you can make sense of them.

This is what gets run as user vaultwarden on the “master” server:


#!/bin/sh
# VAULTWARDEN (the vaultwarden directory) and STORE (the backup
# directory) are set earlier in the script.

STAMP=$(date +"%Y%m%d%H%M%S")

cd ${VAULTWARDEN} || {
 echo "cannot cd to ${VAULTWARDEN}"
 exit 1
}

umask 007

# consistent snapshot of the database via SQLite's online backup
sqlite3 ./data/db.sqlite3 ".backup ${STORE}/snap_${STAMP}_vaultwarden.sqlite3"

# pack the env file, the log and the whole data folder
tar czf ${STORE}/snap_${STAMP}_vaultwarden.tgz .vaultwarden.env vaultwarden.log data

cd $STORE || {
 echo "cannot cd to ${STORE}"
 exit 1
}

# prune old snapshots, keeping only the newest few snap_* files
ls -1tp snap_* 2> /dev/null | tail -n +5 | xargs -I '{}' --no-run-if-empty rm --force -- '{}'

# stable link
ln -f snap_${STAMP}_vaultwarden.sqlite3 snap_vaultwarden.sqlite3
ln -f snap_${STAMP}_vaultwarden.tgz snap_vaultwarden.tgz
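As a side note, the `ls -1tp | tail -n +5` line keeps only the four newest `snap_*` files; you can convince yourself with dummy files:

```shell
#!/bin/sh
# Demo of the pruning one-liner with dummy files: everything past the
# four newest snap_* entries is removed.
cd "$(mktemp -d)"
for i in 1 2 3 4 5 6; do
 touch "snap_$i"
 sleep 1   # distinct mtimes so ls -t orders them
done
ls -1tp snap_* 2> /dev/null | tail -n +5 | xargs -I '{}' --no-run-if-empty rm --force -- '{}'
ls snap_*   # snap_3 .. snap_6 remain
```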

Then from my “NAS” snap_vaultwarden.tgz and snap_vaultwarden.sqlite3 are fetched (this is run as a normal user with access to the server and to the replica server).


# copy master (remote) database to raspi4
# (destination reconstructed from the description below: the snapshot
#  lands as db.sqlite3.MASTER in /data/vaultwarden on the raspi4)
scp -3 \
  user@server:/srv/snap_vaultwarden/snap_vaultwarden.sqlite3 \
  user@raspi4:/data/vaultwarden/db.sqlite3.MASTER
As you see, the db.sqlite3 is placed as db.sqlite3.MASTER on the raspi4.

There, this runs (as root) as a cron job:


#!/bin/sh

cd /data/vaultwarden || {
 echo "$0: cannot cd /data/vaultwarden, exiting..."
 exit 1
}

RESTART="no"

if [ -f db.sqlite3.MASTER ]; then
 if cmp -s db.sqlite3 db.sqlite3.MASTER; then
  # nop if files are equal
  rm -f db.sqlite3.MASTER
 else
  # files are different, replace existing with MASTER
  echo "$0: new (master) db.sqlite3 found, updating.."

  systemctl is-active vaultwarden 1> /dev/null && {
   echo "$0: vaultwarden was active, stopping.."
   systemctl stop vaultwarden
   RESTART="yes"
  }

  # remove the live database and its -wal/-shm side files
  rm -f db.sqlite3 db.sqlite3-*
  mv db.sqlite3.MASTER db.sqlite3

  [ "$RESTART" = "yes" ] && {
   echo "$0: restarting vaultwarden.."
   systemctl start vaultwarden
  }
 fi
fi

exit 0
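If you want to dry-run the replace logic above without touching systemd, the cmp branch can be exercised with dummy files:

```shell
#!/bin/sh
# Dry run of the "is the copied database newer/different" check.
cd "$(mktemp -d)"
printf 'old' > db.sqlite3
printf 'old' > db.sqlite3.MASTER
if cmp -s db.sqlite3 db.sqlite3.MASTER; then
 echo "equal: nothing to do"
 rm -f db.sqlite3.MASTER
fi
printf 'new' > db.sqlite3.MASTER
if ! cmp -s db.sqlite3 db.sqlite3.MASTER; then
 echo "different: replacing with master copy"
 mv db.sqlite3.MASTER db.sqlite3
fi
cat db.sqlite3   # now "new"
```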

As I said, it works fine for me, but of course will have to be tweaked/changed for your use-case.

(and the obvious disclaimer: I take no responsibility if using this destroys your vaultwarden database and deletes all your backups and kills your kitten…)