I am building a NAS in RAID 1 (Mirror) mode. Should I buy 2 of the same drive from the same manufacturer? or does it not matter so much?

  • Avid Amoeba@lemmy.ca · 1 year ago

    You absolutely can. Of course you’ll only be able to use as much capacity as the smaller disk. Some time ago I was running a secondary mirror with one 8TB disk and three disks pretending to be the other 8TB disk. They were 4TB, 3TB and 1TB - trivial with LVM. It worked without a hitch for a few years, until I replaced the three gnomes in a trench coat with another 8TB disk. Obviously that’s suboptimal, but it works fine under certain loads.
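    For the curious, the LVM side of that is only a few commands. A rough sketch (device names are made up - adjust for your system, and note these commands are destructive):

    ```shell
    # Pool the 4TB, 3TB and 1TB disks into one volume group
    pvcreate /dev/sdb /dev/sdc /dev/sdd
    vgcreate gnomes /dev/sdb /dev/sdc /dev/sdd

    # One ~8TB logical volume spanning all three disks
    lvcreate -l 100%FREE -n bigdisk gnomes

    # /dev/gnomes/bigdisk can then act as the second leg of the mirror,
    # e.g. paired with the real 8TB disk in an mdadm RAID 1
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/gnomes/bigdisk
    ```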

  • sj_zero@lotide.fbxl.net · 1 year ago

    As long as they’re mostly the same. For example, many controllers won’t let you mix SSDs with HDDs.

  • Kaldo@kbin.social · 1 year ago

    I always thought you were supposed to buy similar drives so that performance is better for some reason (I guess the same logic as when picking RAM?), but this thread is changing my mind. I guess it doesn’t matter after all 👀

      • ErwinLottemann@feddit.de · 1 year ago

        that’s also what we did in the early 2000s when building servers. today i don’t think it really matters. i haven’t had a failed drive in about 10 years and only needed to swap them out because of capacity…

        • Valmond@lemmy.mindoki.com · 1 year ago

          I actually thought about that quite a bit. Back in the day hard drives were made of sugar-glass. Remember the Deskstar? Hrm, the “Death Star”. Do anything? It breaks. Do nothing? 15% failure rate anyway (or so I remember).

          Today I have 3TB + 2TB drives in my NAS (WD Black, maybeee) - one mostly backs up the other - and I think they’re 10+ years old… I’m not using it as a real backup, but I still think I should swap one out. But then again, the Synology is so old too…

          I’ve heard about that newer Linux file system, “M” or “L” something, where you just add drives and it sorts stuff out itself. Maybe I should check that out…

    • Still@programming.dev · 1 year ago

      RAM matters because the CPU will run all the sticks at the worst speed and worst timings among them; drive reads and writes are buffered, so it doesn’t really matter.

  • wazzupdog@lemmy.world · 1 year ago

    If you haven’t looked into it, and if you already have disks of varying capacity, check out JBOD. You will have to set up a backup system, however, as you won’t have the redundancy of RAID 1.

      • wazzupdog@lemmy.world · 1 year ago

        I’m aware, but RAID 1 is mirroring, which is redundancy; a JBOD offers no redundancy, so a backup would be even more crucial to protecting against data loss. Also, I never said RAID is a backup.

        • Possibly linux@lemmy.zip · 1 year ago

          Can’t you just format a JBOD with ZFS or some other RAID solution? I’m sure it depends on the hardware, but it shouldn’t be rocket science.
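          It isn’t - ZFS just wants block devices, and doesn’t care whether the controller presents them as JBOD. A sketch (hypothetical device names):

          ```shell
          # Mirrored pool (RAID 1 equivalent) out of two bare disks
          zpool create tank mirror /dev/sdb /dev/sdc

          # Or striped with no redundancy (classic JBOD territory - keep backups!)
          zpool create tank /dev/sdb /dev/sdc
          ```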

            • myofficialaccount@feddit.de · 1 year ago

              Don’t know myself, as I have no use case for that setup, but it has been a well-known setup for several years now. If the performance were bad, it wouldn’t be recommended as an alternative as often as it is.

  • vegivamp@feddit.nl · 1 year ago

    Quite the opposite. Use drives from as many different manufacturers as you can, especially when buying them at the same time. You want to avoid similar life cycles and similar potential fabrication defects as much as possible, because those things increase the likelihood that the drives will fail close to each other - particularly under the stress of rebuilding after the first one fails.

    • duncesplayed@lemmy.one · 1 year ago

      To the best of my knowledge, this “drives from the same batch fail at around the same time” folk wisdom has never been demonstrated in statistical studies. But, I mean, mixing drive models is certainly not going to do any harm.

      • SupraMario@lemmy.world · 1 year ago

        I know it’s only what I’ve experienced, but I’ve just been through two weeks of hell from EMC drives failing at the same time because Dell didn’t change up the serials. Had 20 RAID drives all start failing within a few days of each other, and all had consecutive serial numbers.

      • Hopfgeist@feddit.de · 1 year ago

        > mixing drive models is certainly not going to do any harm

        It may, performance-wise, but usually not by enough to matter for a small self-hosting server.

        • TheWoozy@lemmy.world · 1 year ago

          I wouldn’t mix 5400 rpm drives with 7200 rpm drives, but if the rpm and sizes are the same, there won’t be any measurable performance loss.

      • Overspark@feddit.nl · 1 year ago

        If everything went fine during production, you’re probably right. But there have definitely been batches of hard disks with production flaws that caused every drive in the batch to fail in a similar way.

    • empireOfLove@lemmy.one · 1 year ago

      If I had a dollar for every time rebuilding a RAID array after one failed drive caused a second drive failure in the array in less than 24 hours… I’d probably buy groceries for a week.

      • teawrecks@sopuli.xyz · 1 year ago

        I don’t know if you’re talking about the sample of cases you’ve personally witnessed, or the population of all NASes in the world. If the former, that sounds significant. If the latter, it sounds like it’s probably not something to worry about.

        • empireOfLove@lemmy.one · 1 year ago

          Yup. Same age, same design, same failures… and array rebuilds are super-intense workloads that often force a lot of random reads and run the drives at 100% load for many hours.

        • teawrecks@sopuli.xyz · 1 year ago

          I’ve heard just in general. The resilvering process is hard on all the remaining drives for an extended period of time.

          • Avid Amoeba@lemmy.ca · 1 year ago

            So you’re saying I should be running RAIDz2 instead of RAIDz1? You’re probably right. 😂
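            (For reference, the difference is one keyword at pool creation: raidz1 tolerates one failed disk, raidz2 tolerates two, which covers a second failure during the resilver. Hypothetical device names:)

            ```shell
            # One disk of redundancy
            zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

            # Two disks of redundancy - survives another failure mid-resilver
            zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
            ```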

            • teawrecks@sopuli.xyz · 1 year ago

              I made that switch a few years ago for that reason.

              That said, as the saying goes, RAID is not a backup, it should never be the thing that stands between you having and losing all your data. RAID is effectively just one really dependable hard drive, but it’s still a single point of failure.

              • Avid Amoeba@lemmy.ca · 1 year ago

                So you’re saying I should be running JBOD with backups instead of RAIDz1? You’re probably right. 🤭

                • teawrecks@sopuli.xyz · 1 year ago

                  As long as you’re ok with it being way less dependable, and having to rebuild it from scratch more often 😉.

  • Decronym@lemmy.decronym.xyzB · 1 year ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters | More Letters
    NAS  | Network-Attached Storage
    RAID | Redundant Array of Independent Disks (mass storage)
    SSD  | Solid State Drive (mass storage)

    3 acronyms in this thread; the most compressed thread commented on today has 15 acronyms.

    [Thread #328 for this sub, first seen 2nd Dec 2023, 20:35] [FAQ] [Full list] [Contact] [Source code]