My backup strategy is twofold:
- I want a frequent and regular backup to easily come back to files or configurations that have accidentally been deleted.
- I want a regular backup on an external hard drive in case my whole computer crashes, is stolen, or anything else of this sort happens and I am unable to recover from it.
- I want those backups to be as efficient as possible, both in terms of time and storage resources.
- Of course, everything needs to be automated and encrypted (especially when it’s on a remote server I don’t own).
That said, here is what I came up with:
- Btrfs snapshots every five minutes when the computer is active (i.e., user logged in). Keep all of them for one day, then only keep the latest of the day for a week.
- This allows me to easily go back to a previous point in time when I have deleted something stupidly. It has saved my life quite a few times… you know, when on the command line you keep removing stuff and it doesn’t go to the trash… well, with snapshots it’s easy: I just copy the file from the snapshot back into my working directory (see the sketch right after this list).
- Restic encrypted backup to an S3 bucket every week.
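Restoring a file from a snapshot really is just a copy. A minimal sketch, assuming snapper keeps its snapshots under ~/.snapshots and using a made-up snapshot number and file path:

# find the snapshot you want to restore from
snapper -c home list

# each snapshot's file tree lives under <number>/snapshot/
cp ~/.snapshots/42/snapshot/projects/notes.md ~/projects/notes.md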
The Tools
Obviously, for this setup you need your filesystem (at least your /home, since that is what we are backing up right now) to be formatted with btrfs.
Why btrfs? Well, because it is built right into the kernel, it has been tested for quite some time, it is performant, snapshots are incremental, and it is CoW (copy-on-write), which prevents files from getting corrupted while they are being backed up, so you can keep working as you normally would without worry.
You will also need the amazing restic program.
Less necessary (because you can create your own script), but a very handy tool I use is the snapper program made by openSUSE, which takes care of automating and scheduling btrfs snapshots for you.
Finally, you need either cron or a systemd service, whichever you like or know how to use.
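For reference, on Fedora (the backup script later on tags its snapshots with fedora) all of these come from the standard repositories; adapt the package names to your distribution:

sudo dnf install btrfs-progs snapper restic cronie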
The Steps
Frequent Regular Local Backups
If you want a detailed step by step guide for snapper, you can either look at the Arch wiki or the OpenSUSE wiki tutorial.
This is quite easy to set up: you install snapper for your platform and that’s it.
Then you set up a configuration for your /home with the following command:
sudo snapper -c home create-config /path/to/subvolume
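To check that the configuration was picked up (the config name home matches the command above), snapper can print it back and list the snapshots it has taken so far:

# show the configuration snapper created
sudo snapper -c home get-config

# list existing snapshots (empty right after setup)
sudo snapper -c home list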
Here is my configuration for my /home subvolume:
Key │ Value
─────────────────────────┼─────────
ALLOW_GROUPS │
ALLOW_USERS │ myuser
BACKGROUND_COMPARISON │ yes
EMPTY_PRE_POST_CLEANUP │ yes
EMPTY_PRE_POST_MIN_AGE │ 3600
FREE_LIMIT │ 0.2
FSTYPE │ btrfs
NUMBER_CLEANUP │ yes
NUMBER_LIMIT │ 50
NUMBER_LIMIT_IMPORTANT │ 10
NUMBER_MIN_AGE │ 3600
QGROUP │
SPACE_LIMIT │ 0.5
SUBVOLUME │ /home
SYNC_ACL │ yes
TIMELINE_CLEANUP │ yes
TIMELINE_CREATE │ yes
TIMELINE_LIMIT_DAILY │ 7
TIMELINE_LIMIT_HOURLY │ 5
TIMELINE_LIMIT_MONTHLY │ 0
TIMELINE_LIMIT_QUARTERLY │ 0
TIMELINE_LIMIT_WEEKLY │ 1
TIMELINE_LIMIT_YEARLY │ 0
TIMELINE_MIN_AGE │ 1800
If you want details on those settings, definitely check out the openSUSE documentation: https://doc.opensuse.org/documentation/leap/reference/html/book-reference/cha-snapper.html#sec-snapper-clean-up-timeline
Basically, this makes sure snapper takes a snapshot of my /home directory every five minutes and keeps the last 5 hourly snapshots, the last 7 daily ones, and the last weekly one.
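One detail: the timeline snapshots are triggered by snapper’s snapper-timeline.timer systemd unit, which fires hourly by default. To actually get a snapshot every five minutes, you have to override the timer. A sketch, assuming your distribution ships snapper with the systemd timer rather than the old cron scripts:

sudo systemctl edit snapper-timeline.timer

# add this drop-in to run the timeline every five minutes:
[Timer]
OnCalendar=
OnCalendar=*:0/5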
Regular Remote Backups
Restic is amazing. It works great, it is fast and reliable. I used to have a script that took the latest snapper snapshot, usually stored under /home/user/.snapshots/. For that to work, you need to set the proper permissions for restic to access the ~/.snapshots/ directory, as it is owned by root (snapper runs as root by default).
However, snapper saves each snapshot under its own number (e.g., ~/.snapshots/23/). When backing up, restic takes the whole path of the snapshot and looks for a parent snapshot with the same path. If every week you back up the latest snapper snapshot, restic will never find a parent snapshot, as the path is different every time (the next one will be ~/.snapshots/24/). The issue is that restic, not finding a parent snapshot, will read and back up all the files of both snapshot 23 and snapshot 24. We completely lose the main strength of restic: incremental backups.
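To make that concrete (the snapshot numbers are just an example, and the actual file tree of a snapper snapshot sits in its snapshot/ subdirectory):

# week 1: restic records the path .snapshots/23/snapshot with this snapshot
restic backup /home/user/.snapshots/23/snapshot

# week 2: same data, different path, so restic finds no parent snapshot
# for it and re-reads every single file
restic backup /home/user/.snapshots/24/snapshot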
Rustic is a rewrite of restic in Rust. It has options to arbitrarily set a remote path for a local path, but it is not stable as of now. Thus, I went for a workaround with restic.
Instead of taking the latest snapper snapshot and backing it up with restic, and since btrfs snapshots are so cheap, I create a new snapshot directly with btrfs at a stable path (i.e. /home/.backup/), back up that snapshot, and then delete it.
Here is the script I made. I have a few checks to make sure I don’t start backing up at a bad moment: the battery must be either charging or above 50%, and the network must be reachable. Also, I log everything to a file; since the script runs automatically from a cron job, I can check whether something bad happened at some point.
#!/bin/bash
#SPDX-FileCopyrightText: 2025 lvgn lvgn@lvgn.xyz
#SPDX-License-Identifier: Apache-2.0
# Set variables
LOG_FILE="/home/user/backup/backup_log.txt"
BACKUP_DIR="/home/.backup"
source /home/user/backup/.restic-env
# Check if battery is charging or above 50%
check_battery() {
# Get battery status
BATTERY_STATUS=$(cat /sys/class/power_supply/BAT0/status 2>&1)
BATTERY_PERCENTAGE=$(cat /sys/class/power_supply/BAT0/capacity 2>&1)
# Log stdout and stderr to a file
echo "Battery Status: $BATTERY_STATUS" >> "$LOG_FILE"
echo "Battery Percentage: $BATTERY_PERCENTAGE" >> "$LOG_FILE"
# Check if charging or above 50%
if [[ $BATTERY_STATUS == "Charging" || $BATTERY_PERCENTAGE -gt 50 ]]; then
return 0 # Battery conditions met
else
return 1 # Battery conditions not met
fi
}
check_network() {
# Ping once and log the result; return ping's exit status, not echo's
ping -c 1 8.8.8.8 >> "$LOG_FILE" 2>&1
PING_STATUS=$?
echo "Ping return status: $PING_STATUS" >> "$LOG_FILE"
return $PING_STATUS
}
restic_backup() {
{
# Create a btrfs snapshot of /home at the stable backup path
sudo btrfs subvolume snapshot /home/ "$BACKUP_DIR"
# Backup the subvolume
restic -r $RESTIC_REPOSITORY backup \
--exclude-file /home/user/backup/restic-excludes.txt \
--exclude-caches \
--tag home --tag fedora \
$BACKUP_DIR
# Once done, delete the subvolume
sudo btrfs subvolume delete $BACKUP_DIR
} >> $LOG_FILE 2>&1
}
# Run Restic forget with retention policy
restic_forget() {
# Forget snapshots older than one month
{
echo "--------------------$(date) Restic retention policy started---------------"
restic -r $RESTIC_REPOSITORY forget --keep-within 1m --prune
echo "--------------------$(date) Restic retention policy finished--------------"
} >> $LOG_FILE 2>&1
}
{
while true; do
if check_battery && check_network; then
echo "=================Staring backup on $(date)======================="
restic_backup
restic_forget
echo "=================End of backup on $(date)========================"
break
else
echo "Conditions not met for backup. Postponing backup for 2 hours………………" >> $LOG_FILE 2>&1
sleep 30m
fi
done
} >> $LOG_FILE 2>&1
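The script sources /home/user/backup/.restic-env, which just exports the variables restic and its S3 backend read. Something along these lines, with placeholder values (the bucket, password file, and credentials here are made up):

# /home/user/backup/.restic-env
export RESTIC_REPOSITORY="s3:https://s3.amazonaws.com/my-backup-bucket"
export RESTIC_PASSWORD_FILE="/home/user/backup/.restic-password"
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."

The repository also needs to be initialized once, before the very first backup, with restic -r "$RESTIC_REPOSITORY" init.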
Cron Job
I may try to set up a systemd service in the future instead of cron, but for now, cron is good enough.
Simply enter the following command (you may need to install cron beforehand):
crontab -e
This opens an editor where you can define your cron jobs. Choose when you want your backup to run and give it the path of your script. Also, I log everything in the same log file.1
45 10 * * thu /home/user/backup/restic-backup.sh >> /home/user/backup/backup_log.txt 2>&1
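For the record, the systemd version I may switch to would look roughly like this (a sketch only: unit names and paths are made up, and the schedule mirrors the cron line above):

# ~/.config/systemd/user/restic-backup.service
[Unit]
Description=Weekly restic backup

[Service]
Type=oneshot
ExecStart=/home/user/backup/restic-backup.sh

# ~/.config/systemd/user/restic-backup.timer
[Unit]
Description=Run the restic backup every Thursday morning

[Timer]
OnCalendar=Thu *-*-* 10:45:00
Persistent=true

[Install]
WantedBy=timers.target

It would then be enabled with systemctl --user enable --now restic-backup.timer (keeping in mind that user units only run while a session exists, unless lingering is enabled with loginctl enable-linger).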
Proper Permissions
btrfs command requires root privileges. Hence, we need to add the user
running the cron job to a sudoers file in /etc/sudoers.d/restic
for
example with:
<yourusername> ALL=(root) NOPASSWD: /usr/sbin/btrfs
essentially giving it permission to run btrfs without being prompted for the sudo password.
That way, you do not need to create a specific user and add it to the sudo group; you just need your own user to be able to run btrfs commands without being asked for a password.
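When creating that file, it is safer to go through visudo, so a syntax error cannot lock you out of sudo:

# create or edit the drop-in with syntax checking
sudo visudo -f /etc/sudoers.d/restic

# or just verify an existing file
sudo visudo -cf /etc/sudoers.d/restic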
You may wonder why I log and redirect both in the script and in the cron job. It’s because sometimes I use the script manually, when I want to make sure I have a remote backup somewhere of the last few things I did, for instance. ↩︎