My backup strategy

Recently I’ve had to re-engineer my whole backup strategy because of a looming hardware issue on my old backup server. Since I operate servers at Hetzner, I purchased some backup space there and built my new system around that offering. The new strategy consists of several key components:

  • Mounting a remote directory via sshfs
  • Ensuring the remote mountpoint is always available using autofs
  • Creating a loopback filesystem on the remote storage
  • Incremental rsync snapshots using my Hactar backup script

I will explain all the necessary steps to set this up on Debian Linux (my machine at the time of writing runs version 7.8).

Install and configure autofs & sshfs

Both software packages are available through Debian’s apt package management system. Install them via:

apt-get install sshfs autofs

First make sure you can connect to your remote server via SSH without a password. To achieve this for the Hetzner backup space, read the article on their wiki: Backup Space SSH Keys. For any other SSH-accessible host you can refer to my (rather old) explanation here on this blog: SSH public key authentication.
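For a regular host, the key setup is the usual routine; here is a quick sketch (user and host are placeholders, and the Hetzner backup space needs the slightly different procedure from their wiki instead):

# generate a key pair if you don't already have one
ssh-keygen -t rsa -b 4096
# install the public key on the backup host
ssh-copy-id myuser@example.com
# verify that the login now works without a password
ssh myuser@example.com true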

Create the directory you would later like to mount your SSH filesystem to. In my case I chose /mnt/sshfs.
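For the path above that is a single command:

mkdir -p /mnt/sshfs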

Once you can reach your remote system without entering a password, proceed to configure autofs. Edit the file /etc/auto.master and add the following line:

/mnt/sshfs /etc/auto.sshfs uid=0,gid=0,--timeout=30,--ghost

Next create the file /etc/auto.sshfs with the following contents:

my-backup-space -fstype=fuse,rw,nodev,nonempty,allow_other,reconnect,max_read=65536 :sshfs\#myuser@example.com\:/

Let’s break down each of the fields in this map entry.

  • my-backup-space – This will be the name of the directory in /mnt/sshfs where your remote directory will be available, just as if it were a local mountpoint.
  • -fstype=fuse,rw,nodev,nonempty,allow_other,reconnect,max_read=65536 – These are the same parameters you would put in /etc/fstab. The parameters allow_other, reconnect and max_read are specific to sshfs; read more details in the sshfs manpage.
  • :sshfs\#myuser@example.com\:/ – This instructs autofs to use sshfs to mount this directory. The hash symbol (#) must be escaped, or everything after it would be treated as a comment. Next come the username and the hostname or IP address of the machine where your backup will reside, separated by an @ symbol. The last colon must be escaped as well and is followed by the path on the remote system that should be mounted into your local directory (“/” in this example).

Next reload autofs using service autofs reload (yes, we are still in pre-systemd times on Debian 7.8). Now each time you access the specified directory (e.g. /mnt/sshfs/my-backup-space), autofs will make sure everything is mounted via sshfs without any interaction on your side.
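To verify the automatic mounting, simply access the directory and check the mount table:

ls /mnt/sshfs/my-backup-space    # first access triggers the sshfs mount
mount | grep sshfs               # the mount should now show up here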

Sparse image file

To be able to actually back up to the remote filesystem and retain all permissions, it is necessary to create a filesystem image on the backup host, which we will later mount and use as our actual backup target.

dd if=/dev/zero of=/mnt/sshfs/my-backup-space/backup.img bs=1 seek=500G count=1
mkfs.ext4 /mnt/sshfs/my-backup-space/backup.img

The seek parameter creates a sparse image on the remote directory, meaning dd won’t actually fill every bit of the space with zeroes, which could take a very long time over a network connection. Note that mkfs.ext4 will warn that the image is not a block special device; it is safe to answer y (or pass -F to skip the prompt).
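You can see what a sparse file looks like by comparing its apparent size with the blocks actually allocated. Over sshfs the allocation numbers are not necessarily reported faithfully, so this little demonstration (with a hypothetical 1G file) is most instructive on a local filesystem:

dd if=/dev/zero of=/tmp/sparse-demo.img bs=1 seek=1G count=1
ls -lh /tmp/sparse-demo.img    # apparent size: 1.0G
du -h /tmp/sparse-demo.img     # actual allocation: a few KB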

Now, to also mount this loopback device automatically, add the following line to /etc/auto.master:

/- /etc/auto.img --timeout=30

Next create the file /etc/auto.img with the following contents:

/backup -fstype=ext4,defaults,sync,dirsync,commit=1,loop :/mnt/sshfs/my-backup-space/backup.img

This will mount your filesystem image as /backup on your local machine (the /- mount point in auto.master marks this as a direct map, which is what allows the absolute path /backup as the key). With those steps behind us, we can start to actually save files and build our backup strategy.
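Again, accessing the path is all it takes to trigger the mount:

ls /backup         # triggers the loopback mount of the image
df -h /backup      # should list the ext4 filesystem from backup.img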

Incremental rsync snapshots

Previously my backup “strategy” consisted of just a nightly rsync run via cron using the command rsync -avz --files-from=rsync.includes --delete / user@example.com:/backup/hostname. That is nice, but it only gives you a clone of the directories listed in rsync.includes, without any data retention. This time around I wanted to be a bit more sophisticated and have a backup solution with the following features:

  • Back up all files, excluding only a few defined directories that don’t need to be restored in case of a system restore
  • Keep old backups for a defined number of days
  • Have everything available in one neat script that takes a few arguments for configuration

With those points in mind I wrote Hactar, an incremental daily rsync backup script. What it does will be outlined below.

On my server Hactar lives in /usr/local/bin/hactar so it is available in my PATH, but in theory it could be saved anywhere. Just make sure you give it executable permissions using chmod +x hactar. By default Hactar expects an rsync excludes file in /etc/hactar.excludes, but you can alter this path in the script itself or pass it on the command line with -e /full/path/to/excludes.file. This is my excludes file, with what I think are reasonable defaults for a webserver running Debian Linux:

/backup
/bin
/boot
/dev
/lib
/lib64
/lost+found
/media
/mnt
/proc
/run
/sbin
/selinux
/sys
/tmp
/var/cache
/var/lock
/var/run
/var/tmp
/var/swap.img
/var/lib/mysql

The basic usage of the script is hactar [OPTIONS] SOURCE TARGET, where source and target can be local or remote directories using the familiar rsync syntax.

Here are a few example calls to hactar:

 hactar -v -e my_excludes / /backup
 hactar example.com:/home /backup
 hactar /var/www example.com:/mnt/backup
 hactar -r 7 /var/log /backup/logs

What the script does is create a new subdirectory in the specified target directory, named after the current day in the form yyyy-mm-dd (e.g. 2015-01-01). On the first run Hactar will rsync everything (minus excluded directories) into this subdirectory, and you will have a full snapshot of the system you want to back up. On the next run (the next day), the script will create another subdirectory parallel to the previous one and use the previous day’s subdirectory as a hardlink target for the current day. This means rsync only transfers files that are new or changed into the new subdirectory and hardlinks everything else to the previous day. That way the system only uses as much space as the new and changed files add to the backup, yet the current directory still contains a full backup snapshot.
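The heavy lifting in such a scheme is done by rsync’s --link-dest option. A minimal sketch of what a daily run boils down to (the actual flags Hactar passes may differ):

# day one: full snapshot
rsync -a --exclude-from=/etc/hactar.excludes / /backup/2015-01-01
# day two: copy only new/changed files, hardlink the rest against yesterday
rsync -a --exclude-from=/etc/hactar.excludes \
      --link-dest=/backup/2015-01-01 / /backup/2015-01-02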

The default retain time is 29 days. This means that during its daily (nightly) run, after creating today’s backup, Hactar will delete the oldest subdirectory once it falls outside that window. Since we hardlinked all files, there won’t be any problems with files that were linked into the now-deleted snapshot: hardlinked files remain on disk as long as at least one link points to them.
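The retention step then amounts to something like the following sketch (not the literal code from the script), relying on the yyyy-mm-dd naming of the snapshot directories:

RETAIN=29
# compute the date of the snapshot that has just expired
EXPIRED=$(date -d "-$((RETAIN + 1)) days" +%Y-%m-%d)
[ -d "/backup/$EXPIRED" ] && rm -rf "/backup/$EXPIRED"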

With the script and the excludes file in place I just make sure I can access each server I want to back up via passwordless SSH login and add another line to my crontab.

0 1 * * * /usr/local/bin/hactar / /backup/localhost > /dev/null
0 2 * * * /usr/local/bin/hactar example.com:/var/www /backup/webserver > /dev/null

Those two example lines will back up everything from localhost and all website files from a remote system into my locally mounted backup space at 1 a.m. and 2 a.m. respectively. Appending > /dev/null to each cronjob command discards the regular script output, while cron will still email you anything written to stderr, so you hear about it in case anything unusual happens during the nightly backup run.
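By default cron mails that remaining output to the local user; if you want it delivered somewhere else, set a recipient at the top of the crontab (the address here is a placeholder):

MAILTO=admin@example.com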

Conclusion

I am always open to comments, questions and critique on this system; just use the form below. Also, I am very happy about any pull requests on the Hactar GitHub Repository to further improve the script’s usability and squash any bugs that have evaded me thus far.

Some of you might have spotted that my default excludes file contains the line /var/lib/mysql. That’s because I dump all databases separately twice per day using mysqldump and don’t want the redundancy of also having the raw data files in my backup. Since this method could potentially cause integrity issues when used on a running MySQL server (although I have thankfully never experienced any in the 10+ years I have used it), I am very open to suggestions for better database backup solutions. Ideally I would do this via LVM snapshots, but none of my current servers has its databases on LVM volumes at the moment, so I am looking for alternatives with minimal downtime and maximum integrity.
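For reference, such a dump job could look like the following sketch, assuming credentials in ~/.my.cnf and a hypothetical target directory (note that % must be escaped in crontabs):

0 */12 * * * /usr/bin/mysqldump --all-databases --single-transaction | gzip > /var/backups/mysql/all-$(date +\%F-\%H).sql.gz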

11 comments

  1. Hi, I tried to set this up like described on our Hetzner root server running Debian Squeeze.

    1. First of all, even the mounting under /mnt/sshfs did not work.
    I got that working after the following correction and a restart of the server:
    /etc/autofs.master has to be named /etc/auto.master (service autofs reload was not enough)

    2. mkfs.ext4 /mnt/sshfs/my-backup-space/backup.img gives the following message:
    mke2fs 1.41.12 (17-May-2010)
    /mnt/sshfs/my-backup-space/backup.img is not a block special device.
    Proceed anyway? (y,n) y

    After answering y, 4000 blocks are created, but mounting the image under /backup does not work, not even after another server restart. What else could be wrong here?

    1. Please stick to English on this blog, so more people can benefit.

      Thanks for pointing out my filename typo, of course it is ‘auto.master’ and not ‘autofs.master’.
      Restarting the autofs service should definitely work; there is no need for a server restart anywhere in this whole process. I cannot determine what exactly is going wrong with your setup, because you only told us the “symptoms” and that it “does not work”, but not what kind of error messages you are getting.
      Have a look through all relevant logfiles and I’m sure you will figure out what’s happening.

  2. If I understand autofs right, the line
    / /etc/auto.img --timeout=30 in auto.master
    backup -fstype=ext4,defaults,loop :/mnt/sshfs/my-backup-space/backup.img
    in the file /etc/auto.img should auto-mount a mount point called /backup, correct?

    That is not happening, not even with service autofs reload.

    So /backup does not exist for hactar to use. There is no error message in /var/log/messages.

    autofs is telling me “Reloading automount maps…done.”
    but it is just not mounting this device.

  3. As pointed out by @Werner Noske, I am also facing the same issue.

    /backup is not getting mounted. Can you please suggest some points?

    Note:
    1. SELinux is not enabled.
    2. sshfs is working fine, as I am able to see my img file at the sshfs location.
    3. autofs-5.0.5-113.el6.x86_64

    Here are the errors available in my /var/log/messages:

    Sep 9 14:13:45 del automount[30160]: handle_mounts: mount of / failed!
    Sep 9 14:13:45 del automount[30160]: master_do_mount: failed to startup mount
    Sep 9 14:52:03 del automount[3027]: lookup_read_master: lookup(nisplus): couldn’t locate nis+ table auto.master
    Sep 9 14:52:03 del automount[3027]: do_mount_autofs_indirect: failed to create ioctl fd for autofs path /
    Sep 9 14:52:06 del automount[3027]: handle_mounts: mount of / failed!
    Sep 9 14:52:06 del automount[3027]: master_do_mount: failed to startup mount
    Sep 9 14:53:23 del automount[3214]: lookup_read_master: lookup(nisplus): couldn’t locate nis+ table auto.master
    Sep 9 14:53:23 del automount[3214]: do_mount_autofs_indirect: failed to create ioctl fd for autofs path /
    Sep 9 14:53:26 del automount[3214]: handle_mounts: mount of / failed!
    Sep 9 14:53:26 del automount[3214]: master_do_mount: failed to startup mount
    Sep 9 11:43:18 del automount[1471]: lookup_read_master: lookup(nisplus): couldn’t locate nis+ table auto.master
    Sep 9 11:43:18 del automount[1471]: do_mount_autofs_indirect: failed to create ioctl fd for autofs path /
    Sep 9 11:43:19 del automount[1471]: handle_mounts: mount of / failed!
    Sep 9 11:43:19 del automount[1471]: master_do_mount: failed to startup mount
    Sep 9 12:16:39 del automount[11397]: lookup_read_master: lookup(nisplus): couldn’t locate nis+ table auto.master
    Sep 9 12:18:32 del automount[11726]: lookup_read_master: lookup(nisplus): couldn’t locate nis+ table auto.master
    Sep 9 12:18:32 del automount[11726]: do_mount_autofs_indirect: failed to create ioctl fd for autofs path /
    Sep 9 12:18:40 del automount[11726]: handle_mounts: mount of / failed!
    Sep 9 12:18:40 del automount[11726]: master_do_mount: failed to startup mount

  4. When stopping autofs you’ll get this message: “umount_autofs_indirect: ask umount returned busy”
    I guess autofs is trying to unmount sshfs before the loopback file.

  5. @Heiko & Tapan Kumar Thapa

    You need to make sure all mountpoints (e.g. /backup) are actually directories that exist on your disk. Also, if you get a read-only filesystem on your loop device, it might be that the sshfs connection timed out. So I’d suggest implementing a cronjob that periodically accesses the sshfs directory, so autofs’ reconnect fires.

    For example: */5 * * * * /usr/bin/touch /mnt/sshfs/my-backup-space/ping

  6. Thanks so much for sharing this with us. For me everything is working nicely, except that the backup image wouldn’t automount unless putting:

    /mnt/auto /etc/auto.img --timeout=30,--ghost

    Without adding --ghost it wasn’t mounting the image for me.

  7. I was trying a similar setup, but did not use autofs at first. And I’m still not sure if it is needed… well, at least I don’t have any experience with it, and the documentation I’ve found so far looks quite inconsistent.
    Anyway, I ran into some problems with the image file on sshfs. As soon as I mount the image file, the following kernel errors show up:

    Mar 08 14:41:43 [131215.092833] EXT4-fs (loop1): mounted filesystem with ordered data mode. Opts: (null)
    Mar 08 14:41:47 [131218.505314] blk_update_request: I/O error, dev loop1, sector 88092928

    And many more sectors follow… It seems this is caused by the fact that the image file is stored on an sshfs. When I mount the image file from the local filesystem, it’s just fine. Operating directly on the sshfs also works without incidents; it just wouldn’t allow hard links.

    It feels quite odd, since I did the same thing as you did, as well as several other people who wrote about it. So it shouldn’t be a big deal.
    What OS version and kernel are you running on?
    I’m using the vServer image from Hetzner with Ubuntu 14.04.4 LTS; /proc/version: Linux version 4.2.0-27-generic (buildd@lcy01-23) (gcc version 4.8.2 (Ubuntu 4.8.2-19ubuntu1) ) #32~14.04.1-Ubuntu SMP Fri Jan 22 15:32:26 UTC 2016

  8. For me the line in /etc/auto.master for the image needs to look like:

    /- /etc/auto.img --timeout=300 --ghost

    This is on Ubuntu 16.04 LTS.

    Thank you for the write-up; it has simplified my process and given me ideas on how to back up.
