So, just to get some content going on Lemmy and start contributing here, I thought I’d write a bit about moving to the 6.12 RC with a ZFS pool and what I’ve done on my server to make use of that newfound ability…

Original configuration (pre 6.12):

  • 17 unRAID array drives in XFS format
  • dual parity
  • 2 NVMe drives (cache and appdata are separate) in XFS format
  • Backed up daily with rsync to a second unRAID server on my LAN.

New configuration on 6.12 (currently RC8):

  • 13 unRAID array drives in XFS format
  • dual parity
  • 4 x 8TB drives in a ZFS raidz1 pool
  • 2 NVMe drives (cache and appdata are separate) in ZFS format with compression enabled (rough command-line equivalents are sketched below this list).
  • Backed up hourly with ZFS snapshots
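
For reference, the rough command-line equivalents of what the unRAID GUI sets up look something like the sketch below. The pool and device names are illustrative, not my actual ones; unRAID builds the pools for you through the GUI, so these are just to show what's happening underneath:

  # create a raidz1 pool from four 8TB drives (device paths are examples only)
  zpool create -o ashift=12 zfsarchive raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

  # turn on lz4 compression for the NVMe pools (pool names are examples only)
  zfs set compression=lz4 cache
  zfs set compression=lz4 appdata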

Why the change?

  • Moving to ZFS for my “important data”, which is to say personal documents and family photos (yay babies!).
  • Snapshots help in the event of a “soft” data error (a file being accidentally deleted, overwritten, damaged by malicious software, hit by bitrot, and so on). They also enable extremely quick replication to my backup server.
  • Faster access to those personal documents with data striped across 4 drives.
  • Keeping the main array as unRAID array drives for “easily replaceable data” (mostly media files, Linux ISOs, etc.) so I can expand it by chucking another drive into my server or up-sizing an older drive.

Enhanced backups through ZFS:

  • ZFS has some rather remarkable options for data backups that are enabled by the snapshot capability of the filesystem. Rather than sending individual files across the network and laboriously calculating the differences between each file in the dataset (a part of the ZFS pool), you can essentially just send the “difference” between snapshots, which streams between servers in a very short time (usually only a couple of seconds in my case).

This means my system is continually backed up on an hourly basis, with snapshots saved every hour, plus daily and monthly snapshots kept for half a year.
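
Under the hood, an incremental send/receive between two snapshots looks something like this; the dataset, snapshot, and host names here are hypothetical, and in practice the sanoid/syncoid tooling described below handles it for me:

  # take the new hourly snapshot on the source
  zfs snapshot zfsarchive/Documents@hourly_2023-06-14_10:00

  # send only what changed since the previous snapshot to the backup box
  zfs send -i zfsarchive/Documents@hourly_2023-06-14_09:00 \
      zfsarchive/Documents@hourly_2023-06-14_10:00 | \
      ssh backupserver zfs receive backuppool/Documents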

Plugins in use

The current unRAID RC8 supports ZFS pools, but GUI support for managing them is lacking. I’m using the following plugins and tools to accomplish everything (available through App installs):

  • ZFS Master for Unraid makes most ZFS operations a GUI interaction rather than a trip to the terminal. I’ve heard rumblings that unRAID may acquire/in-house this plugin to add the functionality to the GUI; it would be worthwhile.
  • Sanoid automatically handles ZFS snapshots, rotating them based on the number of required snapshots per month and/or day. It also enables sending ZFS snapshots to a backup server and rotating those snapshots as well, to ensure continuity of data. It requires a bit of config-file editing by hand and setting up a cron script (a sample is sketched below this list), but nothing difficult (it’s well documented), and it took about 5 minutes to set up successfully.
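
To give an idea of the hand-editing involved, a minimal sanoid.conf looks roughly like the sketch below. The dataset name and retention counts are just examples (check the Sanoid documentation for the full set of options), and the install paths in the cron lines depend on how the plugin lays things out:

  [zfsarchive/Documents]
          use_template = production
          recursive = yes

  [template_production]
          hourly = 24
          daily = 30
          monthly = 6
          autosnap = yes
          autoprune = yes

  # cron entries: take/prune snapshots frequently, replicate to the backup server hourly
  */5 * * * * /usr/local/sbin/sanoid --cron
  0 * * * *   /usr/local/sbin/syncoid zfsarchive/Documents root@backupserver:backuppool/Documents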

Backup thoughts

RAID (of any type) is not backup. That said, I have part of the “3-2-1” backup strategy automatically enabled here, with my main server backing up the “important stuff” to a separate backup server also running unRAID. That covers having two copies of my data on separate devices, but it does not cover keeping one copy off-site as well.

I do have a removable drive in my backup system (currently in XFS format), mounted through Unassigned Devices, that I insert and sync my ZFS pools to twice a year, then put in a safe deposit box off-site to ensure the data is reliably protected. I currently use XFS for this as it’s easy to plug into any system and get at my files; ZFS is still not as well supported on Windows and Mac systems, but I may go there in the future.
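
The twice-yearly sync to that drive is nothing fancy, something along these lines (the Unassigned Devices mount point and the dataset names are just examples):

  # one-way copy of the important datasets onto the XFS-formatted offsite drive
  rsync -avh --delete /mnt/zfsarchive/Documents/ /mnt/disks/offsite/Documents/
  rsync -avh --delete /mnt/zfsarchive/Photos/ /mnt/disks/offsite/Photos/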

  • MentallyExhausted@reddthat.com · 2 years ago

    Snapshots sound really cool. I just got my server back online after a move so I’ve been hesitant to update and rebuild pools, but this might be the motivation I needed.

    • Nogami@lemmy.world (OP) · edited · 2 years ago

      The other interesting thing with snapshots is that you have a few different ways of utilizing them.

      Reverting changes

      The simplest is rolling all changes back, so if your filesystem got totally hosed (say, by ransomware), you can revert it to the point where it was undamaged and it’s like nothing ever happened (hopefully after getting rid of the ransomware). This means that all changes since the snapshot you revert to are discarded as if they never existed.
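
      In practice that’s a single command (the snapshot name below is just an example); note that it permanently discards everything newer than that snapshot:

        # roll the dataset back to a known-good snapshot;
        # -r also destroys any snapshots taken after it
        zfs rollback -r zfsarchive/Documents@autosnap_2023-06-13_23:59:01_daily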

      Accessing Snapshots Directly

      Say it’s not ransomware, but an important file was deleted and you only discovered it days or months afterwards. You don’t want to undo everything you’ve done since that deletion and lose important new data, so instead you can access a snapshot directly in read-only mode and recover your files.

      To do so, you enter your filesystem and use a hidden directory. In my case, my dataset is mounted at “/mnt/zfsarchive/Documents”.

      By adding “.zfs/snapshot” to the end of the path, I can access the hidden snapshot directory, browse snapshots in read-only mode, and recover my data (you can make the hidden directory visible with a configuration option if necessary, but it’s probably best to leave it hidden most of the time).

      “cd /mnt/zfsarchive/Documents/.zfs/snapshot/autosnap_2023-06-13_23:59:01_daily/”
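
      From there it’s just normal file copying to get things back, and the hidden directory can be toggled visible if you want it to show up in listings (the file name below is made up for the example):

        # see which snapshots exist for the dataset
        zfs list -t snapshot -r zfsarchive/Documents

        # copy the lost file back out of the read-only snapshot
        cp -a /mnt/zfsarchive/Documents/.zfs/snapshot/autosnap_2023-06-13_23:59:01_daily/taxes-2022.pdf /mnt/zfsarchive/Documents/

        # optionally make the .zfs directory show up in normal directory listings
        zfs set snapdir=visible zfsarchive/Documents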

      Snapshot Clones

      You can also take a snapshot and make a fully read/write duplicate of it to experiment with.

      Say you are using software to automatically rename and reorganize thousands of files, but you don’t want to mess with doing it “live” in case something goes bad.

      You can make a clone of a snapshot that you can “modify” however you want for testing purposes. Then if everything goes well, you can “promote” the clone to be the new active filesystem at no risk, or just delete it with no consequence if it goes badly.
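
      A rough sketch of that workflow (dataset and snapshot names are examples):

        # make a writable clone of a snapshot to experiment on
        zfs clone zfsarchive/Documents@autosnap_2023-06-13_23:59:01_daily zfsarchive/Documents_test

        # if the experiment worked out, promote the clone so it no longer depends
        # on the original snapshot (it can then be renamed into place)
        zfs promote zfsarchive/Documents_test

        # or, if it went badly, just throw the clone away
        zfs destroy zfsarchive/Documents_test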

      Sending snapshots to another (backup) filesystem

      This is what I do for backups using the sanoid plugin. When the system creates a snapshot, it records the difference in the filesystem between points in time, then it can send that difference to another filesystem. For example, if I have 500,000 family photos and I decide to delete one bad photo that was out of focus, a traditional backup would need to compare the 500,000 photos on the source and backup destination to find what changed, then delete the file on the destination.

      With a ZFS snapshot, it sends a tiny chunk of data that essentially says “file xyz123.jpg was deleted”, and that’s all it takes to have the change replicated to the destination. By the same token, if I didn’t delete the file but just edited it to remove the photobomber in the background, the incremental stream would contain only the changed blocks of that image, maybe a few hundred kilobytes, and send nearly instantly.
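
      You can even preview how small these incremental streams are before sending anything, with a dry run (snapshot names are examples):

        # -n = dry run, -v = verbose: prints an estimate of the incremental stream size
        zfs send -nv -i zfsarchive/Photos@autosnap_2023-06-13_daily zfsarchive/Photos@autosnap_2023-06-14_daily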

      I’m sure there are many more options, but these are the first ones I learned.

    • Nogami@lemmy.world (OP) · 2 years ago

      Before I started experimenting, I had everything well backed up in a couple of places, so even while experimenting I never felt my data was in any danger. It was all very safe, though I was learning some new lingo.

      I also kept a copy of my ZFS data on my main array (I have the happy fortune to have room to spare right now), so nothing was ever really at risk.