A year ago I set up an Ubuntu server with 3 ZFS pools. I normally don’t copy very large files, but today I was copying a ~30 GB directory and rsync showed the transfer never exceeding 3 MB/s (cp is also very slow).

What is the best file system that “just works”? I’m thinking of migrating everything to ext4

EDIT: I really like the automatic pool recovery feature in ZFS; it has saved me from one hard drive failure so far.

  • taladar@sh.itjust.works · 11 months ago

    XFS has “just worked” for me for a very long time now on a variety of servers and desktop systems.

  • ikidd@lemmy.world · 11 months ago

    Use zfs send/receive instead of rsync. If it’s still slow, it’s probably SMR drives.
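
    A minimal sketch of what that looks like (pool and dataset names here are just placeholders):

        # snapshot the source dataset, then replicate the snapshot
        sudo zfs snapshot tank/media@copy1
        # local copy into another pool
        sudo zfs send tank/media@copy1 | sudo zfs receive backup/media
        # or over the network to another box
        sudo zfs send tank/media@copy1 | ssh user@otherbox sudo zfs receive backup/media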

  • atzanteol@sh.itjust.works · 11 months ago

    Most filesystems should “just work” these days.

    Why are you blaming the filesystem here when you haven’t ruled out other issues yet? If you have a failing drive, a new FS won’t help. Check out smartctl to see if it reports errors on your drives.
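
    Something like this is a reasonable first pass (device names are just examples, adjust for your drives):

        sudo smartctl -H /dev/sda           # quick overall health verdict
        sudo smartctl -a /dev/sda           # full attributes: look at reallocated/pending sectors and the error log
        sudo smartctl -t long /dev/sda      # start a long self-test, check the result later with -a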

    • Merlin404@lemmy.world · 11 months ago

      I’ve learnt the hard way that it doesn’t 😅 I have an Ubuntu server with UniFi Network on it that has now run out of inodes 😅 The positive thing is that I’m forced to learn a lot about Linux 😂
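
      If you want to see where the inodes went, something like this (GNU coreutils; the path and depth are just examples) usually narrows it down:

          df -i                                             # inode usage per filesystem
          sudo du --inodes -x -d 3 / | sort -n | tail -20   # directories holding the most files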

  • Kata1yst@kbin.social · 11 months ago

    ZFS is a very robust choice for a NAS. Many people, myself included, as well as hundreds of businesses across the globe, have used ZFS at scale for over a decade.

    Attack the problem methodically: check your system logs, htop, and zpool status.

    When was the last time you ran a zpool scrub? Is there a scrub, or other zfs operation in progress? How many snapshots do you have? How much RAM vs disk space? Are you using ZFS deduplication? Compression?
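
    As a rough starting point, these are the read-only commands I’d look at first (the pool name is a placeholder; arc_summary is only there if your ZFS tools package installed it):

        zpool status -v tank        # pool health, per-device errors, scrub/resilver progress
        zpool list -v tank          # capacity, fragmentation and layout per vdev
        zfs list -o name,used,avail,compressratio,mountpoint
        dmesg | grep -iE 'ata|error|fail' | tail -50   # kernel-level disk complaints
        arc_summary | head -40      # how much RAM the ARC is actually using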

    • Trincapinones@lemmy.world (OP) · 11 months ago

      I don’t even know what a zpool scrub is lol, do you have some resources to learn more about ZFS? One 1 TB pool and two 500 GB pools, with 32 GB of RAM, no deduplication, and LZ4 compression.

      • Kata1yst@kbin.social · 11 months ago

        Yeah, you should be scrubbing weekly or monthly, depending on how often you use the data. A scrub walks through all the data, verifies the checksums, and proactively repairs any errors it finds; basically preventative maintenance.
        https://manpages.ubuntu.com/manpages/jammy/man8/zpool-scrub.8.html

        Set that up in a cron job and check zpool status periodically.
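
        For example, a cron entry along these lines (the pool name is a placeholder; I believe Ubuntu’s zfsutils-linux package already ships a monthly scrub job in /etc/cron.d, so check before adding your own):

            # /etc/cron.d/zfs-scrub -- scrub the pool at 03:00 on the first of each month
            0 3 1 * * root /usr/sbin/zpool scrub tank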

        No dedup is good. LZ4 compression is good. RAM to disk ratio is generous.

        Check your disks’ sector size and vdev ashift. Modern multi-TB HDDs generally have a 4K physical sector size and want ashift=12. If this is set improperly it can cause massive write amplification, which will hurt throughput.
        https://www.high-availability.com/docs/ZFS-Tuning-Guide/
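
        To check, something like this works (pool and device names are placeholders):

            lsblk -o NAME,PHY-SEC,LOG-SEC          # physical vs logical sector size per disk
            zpool get ashift tank                  # pool-level property (0 usually means auto-detected)
            sudo zdb -C tank | grep ashift         # the ashift each vdev was actually created with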

        How about snapshots? Do you have a bunch of old ones? I highly recommend setting up a snapshot manager to prune snapshots down to just a working set (keep 1-2 monthly, 4 weekly, 6 daily, etc.): https://github.com/jimsalterjrs/sanoid
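
        A rough idea of what that looks like with sanoid (the dataset name and retention counts are just examples; check sanoid.defaults.conf for the exact keys):

            # /etc/sanoid/sanoid.conf
            [tank/data]
                    use_template = production

            [template_production]
                    hourly = 24
                    daily = 6
                    weekly = 4
                    monthly = 2
                    autosnap = yes
                    autoprune = yes

            # then run it from cron every few minutes, e.g.
            # */15 * * * * root /usr/sbin/sanoid --cron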

        And to parrot another insightful comment, I also recommend checking the disk health with SMART tests. In ZFS, as a drive begins to fail, the pool will get much slower because it constantly repairs the errors.

        • BobsAccountant@lemmy.world · 11 months ago

          Adding on to this:

          These are all great points, but I wanted to share something I wish I’d known before I spun up my array… The configuration of your array matters a lot. I had originally chosen RAIDZ1 because it’s the most efficient with capacity while still offering a little fault tolerance. This was a mistake, but in my defense, the hard data on this really wasn’t widely available until long after I had moved my large (for me) dataset onto the array. I really wish I had gone with a striped mirror configuration. The benefits are pretty overwhelming:

          • Performance is better than even RAIDZ2, especially as individual disk size increases.
          • Fault tolerance is better: up to 50% of the disks can fail, so long as at least one disk in each mirrored pair remains functional.
          • Fault recovery is better. With traditional parity arrays, where chunks are distributed across every drive, you have to resilver (rebuild) the entire array, which takes more time, costs performance, and shortens the life of the unaffected drives.
          • You can stripe mismatched mirrored pairs, so long as the drives within each pair match, without the array defaulting to the size of the smallest member. This lets you grow the array more organically, rather than replacing every drive one at a time and resilvering after each change.

          Yes, you pay for these gains with less usable space, but platter drives are getting cheaper and cheaper, so the trade seems more worth it than ever. Oh, and I realize it wasn’t obvious, but I am still using ZFS to manage the array, just not in a RAIDZn configuration.
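
          For reference, a striped mirror is just multiple mirror vdevs in one pool, and you grow it later by adding another mirrored pair (device names are examples; /dev/disk/by-id paths are safer in practice):

              sudo zpool create tank \
                  mirror /dev/sda /dev/sdb \
                  mirror /dev/sdc /dev/sdd
              # later: stripe in another (possibly larger) mirrored pair
              sudo zpool add tank mirror /dev/sde /dev/sdf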

          • Trincapinones@lemmy.world (OP) · 10 months ago

            Thanks for all the help!

            I don’t have any redundancy; my system has an SSD (the one being slow) and two 500 GB HDDs. The HDDs only hold movies and shows, so I don’t care if that goes bad.

            I have a lot of important personal stuff on the SSD, but it’s new (6 months old, from Crucial) and I trust it because I don’t have the money to spare for another drive (+ electricity bills), and I trust that I’ll only lose 1-2 files if it goes bad, thanks to the ZFS protection.

  • Decronym@lemmy.decronym.xyz (bot) · 10 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters   More Letters
    LVM             (Linux) Logical Volume Manager for filesystem mapping
    NAS             Network-Attached Storage
    PSU             Power Supply Unit
    SSD             Solid State Drive mass storage
    ZFS             Solaris/Linux filesystem focusing on data integrity

    5 acronyms in this thread; the most compressed thread commented on today has 9 acronyms.

    [Thread #486 for this sub, first seen 5th Feb 2024, 15:05] [FAQ] [Full list] [Contact] [Source code]

  • SayCyberOnceMore@feddit.uk · 11 months ago

    Where are you copying to / from?

    Duplicating a folder on the same NAS on the same filesystem? Or copying over the network?

    For example, some devices have really fast file transfers until a buffer fills up, and then it crawls.

    Rsync might not be the correct tool either if you’re duplicating everything to an empty destination…?
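
    For a one-off bulk copy to an empty destination, something like a tar pipe or plain cp often beats rsync’s per-file bookkeeping (paths here are just examples):

        cp -a /srcdir/. /dstdir/                        # simple local copy, preserves attributes
        tar -C /srcdir -cf - . | tar -C /dstdir -xf -   # tar pipe, cheap for lots of small files
        rsync -a --info=progress2 /srcdir/ /dstdir/     # if you stick with rsync, at least get whole-transfer progress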

      • SayCyberOnceMore@feddit.uk · 10 months ago

        Still the same, or has it solved itself?

        If it’s lots of small files, rather than a few large ones? That’ll be the file allocation table and / or journal…

        A few large files? Not sure… something’s getting in the way.

  • Unyieldingly@lemmy.world · 11 months ago

    ZFS is by far the best; just use TrueNAS, since Ubuntu is crap at supporting ZFS. Also, keep your pool’s vdevs only 6-8 disks wide.
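
    In other words, with a lot of disks, split them into several narrow vdevs instead of one wide one, e.g. (device names are just examples):

        # 12 disks as two 6-wide RAIDZ2 vdevs rather than a single 12-wide vdev
        sudo zpool create tank \
            raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
            raidz2 /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl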

    • Trincapinones@lemmy.world (OP) · 10 months ago

      I was thinking about switching to Debian (everything I host is in Docker, so that’s why), but the weird thing is that it was working perfectly a month ago.

      • Unyieldingly@lemmy.world · 10 months ago

        Maybe your HBA is having issues, or a drive is failing? Have you done a memtest? You may need to do system-wide tests; it could even be a failing PSU or a software bug.

        Also, TrueNAS is built with Docker and they use it heavily, something like 106 apps. Debian has good ZFS support, but you will end up doing a lot of unneeded work on Debian unless you keep it simple.

    • Moonrise2473@feddit.it · 11 months ago

      From the article it looks like ZFS is the perfect file system for SMR drives, as it would try to cache random writes.

      • PedanticPanda@lemmy.world · 11 months ago

        Possibly, with tuning. OP would just have to be careful about resilvering. In my experience SMR drives really slow down when the CMR buffer is full.