I found what looks like the cheapest 10TB Exos drive on Newegg, and I'm looking to buy four of them. I'll be putting them in my NAS, which I use for my media library and PC backups. As I post this the price is $130. I'm also seeing similar Exos drives that cost $250; is there a difference? Should I shell out for the more expensive drives?

  • Decronym@lemmy.decronym.xyzB · edited · 1 year ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    HA: Home Assistant automation software, or High Availability
    NAS: Network-Attached Storage
    RAID: Redundant Array of Independent Disks for mass storage

    3 acronyms in this thread; the most compressed thread commented on today has 8 acronyms.

    [Thread #383 for this sub, first seen 29th Dec 2023, 10:05] [FAQ] [Full list] [Contact] [Source code]

  • TCB13@lemmy.world · 1 year ago

    It depends. They’re simply the most annoying drives out there, because Seagate in their wisdom decided to remove half of the SMART data from reports, and they won’t let you change the power settings like other drives do. Those drives will never spin down; they’ll even report to the system that they’re spun down while in fact they’re still running at a lower speed. They also make a LOT of noise.

    • ScreaminOctopus@sh.itjust.works · 1 year ago

      I got a set off eBay. Jesus Christ, they’re loud. I ended up returning them because I could hear the grinding through my whole house.

      • Lem453@lemmy.ca · 1 year ago

        I have three 14TB Exos drives. I have them in a Rosewill 4U hot-swap chassis, running Unraid.

        It’s nearly inaudible over the very reasonable case fans. No grinding noises. I can hear the heads moving a bit, but it’s quite subtle. Not sure why people have such different experiences with these.

        • czardestructo@lemmy.world · 1 year ago

          I noticed that when they first spin up on boot they run some subroutine, and they’re pretty loud and chatty. The first time I heard it I was spooked, but it worked fine, and since I just use it for backup I moved on. Once it’s on and in normal operation it’s like any other disk I’ve used over the decades. Nothing as loud as an old SCSI disk or a Quantum Fireball.

    • hperrin@lemmy.world · 1 year ago

      Aren’t they meant to go in data centers? You wouldn’t want a drive in a data center to spin down. That introduces latency in getting the data off of them.

      • TCB13@lemmy.world · 1 year ago

        That should be a choice of the OS / controller card, not of the drive itself. Also, what datacenter wants to run drives that don’t report half of the SMART data, just because Seagate felt like it?

        • lemmyvore@feddit.nl · 1 year ago

          Data centers replace drives when they fail and that’s about it. They don’t care much about SMART data.

          • fruitycoder@sh.itjust.works · 1 year ago

            We used to use SMART data to predict when to order new drives, and on really bad-looking days we’d increase our redundancy. Nothing like getting a bad series of drives holding PBs of data to make you paranoid, I guess.

            • lemmyvore@feddit.nl · 1 year ago

              What kind of attributes did you find relevant? I imagine the 19x codes…

              I’ve read the Backblaze statistics and I’m using a tool (Scrutiny) that takes those stats into account for computing failure probability, but at the end of the day the most reliable tell is when a drive gets kicked out of an array (and/or can’t pass the long SMART test anymore).

              Meanwhile, I have drives with “lesser” attributes sitting at warning values (like command timeout). Of course I monitor them and have good drives on standby, but they still seem to chug along fine for now.
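              For anyone curious, the attributes Backblaze calls out as most failure-predictive (SMART 5, 187, 188, 197, 198) are easy to screen for. A tiny sketch, purely illustrative (the function name and zero thresholds are my own assumptions, not Scrutiny’s actual logic):

              ```python
              # Sketch: flag a drive when any of the SMART attributes Backblaze
              # reports as most predictive of failure has a nonzero raw value.
              # Attribute names follow common smartctl labels.
              CRITICAL_ATTRS = {
                  5: "Reallocated_Sector_Ct",
                  187: "Reported_Uncorrect",
                  188: "Command_Timeout",
                  197: "Current_Pending_Sector",
                  198: "Offline_Uncorrectable",
              }

              def risky_attributes(raw_values: dict) -> list:
                  """Return names of critical attributes with nonzero raw values."""
                  return [name for attr_id, name in CRITICAL_ATTRS.items()
                          if raw_values.get(attr_id, 0) > 0]
              ```

              Anything this returns nonempty for is a drive worth watching closely, even if it still passes its self-tests.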

    • czardestructo@lemmy.world · edited · 1 year ago

      I have an Exos x16 and x18 drive and they both spin down fine in Debian using hdparm. I use them for cold storage and they’re perfectly adequate.

        • czardestructo@lemmy.world · 1 year ago

          It’s really boring, Debian 12:

          ```
          /dev/disk/by-uuid/8f041da5-6f7a-4ff5-befa-2d3cc61a382c {
              spindown_time = 241
              write_cache = off
          }
          ```

          • TCB13@lemmy.world · 1 year ago

            Tried that and doesn’t seem to work. :(

            Relevant documentation for others about -S / spindown_time:

            Values from 1 to 240 specify multiples of 5 seconds, yielding timeouts from 5 seconds to 20 minutes. Values from 241 to 251 specify from 1 to 11 units of 30 minutes, yielding timeouts from 30 minutes to 5.5 hours. A value of 252 signifies a timeout of 21 minutes.
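            That encoding is fiddly enough to get wrong, so here’s a small decoder, just a sketch of the table above, not part of hdparm itself:

            ```python
            def hdparm_spindown_seconds(value: int):
                """Decode an hdparm -S / spindown_time value into seconds.

                Follows the encoding from hdparm(8): 0 disables spindown,
                1-240 are multiples of 5 seconds, 241-251 are 1-11 units
                of 30 minutes, and 252 means 21 minutes.
                """
                if value == 0:
                    return None  # spindown disabled
                if 1 <= value <= 240:
                    return value * 5
                if 241 <= value <= 251:
                    return (value - 240) * 30 * 60
                if value == 252:
                    return 21 * 60
                raise ValueError("253-255 are vendor-defined or reserved")

            # spindown_time = 241 therefore means a 30-minute timeout:
            # hdparm_spindown_seconds(241) == 1800
            ```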

  • Thermal_shocked@lemmy.world · 1 year ago

    Hell of a deal. I started using refurb drives, still with a 5-year warranty, because I was going through so many. Sometimes you get them at half off.

  • ninjan@lemmy.mildgrim.com · 1 year ago

    It’s just the cheapest type of drive there is. The use case is large-scale RAIDs where one disk failing isn’t a big issue. They tend to have a decent warranty, but under heavy load they’re not expected to last multiple years. Personally I use drives like this, but I make sure to have them in a RAID and with backups; anything else would be foolish. Do also note that expensive NAS drives aren’t guaranteed to last either, so a RAID is always recommended.

      • ninjan@lemmy.mildgrim.com · 1 year ago

        For sure higher, but still not high; we’re talking single-digit percentages of failed drives per year, with a massive sample size. TCO (total cost of ownership) might still come out ahead for Seagate, given that they’re often quite a bit cheaper. Still, drive failures are part of the bargain when you’re running your own NAS, so plan for one no matter what drive you end up buying. That means having cash on hand to buy a replacement so you can get back to full integrity as fast as possible. (Best is of course to always have a spare on hand, but that isn’t feasible for a lot of us.)

      • vithigar@lemmy.ca · edited · 1 year ago

        That tracks with my experience as well. Literally every single Seagate drive I’ve owned has died, while I have decade old WDs that are still trucking along with zero errors. I decided a while back that I was never touching Seagate again.

        • Passerby6497@lemmy.world · 1 year ago

          I actually had my first WD failure this past month: a 10TB drive I shucked from an Easystore years ago (and a couple of moves ago). My Synology dropped the disk and I’ve replaced it; the other three in the NAS, bought around the same time, are chugging away like champs.

      • RunningInRVA@lemmy.world · 1 year ago

        Make that RAIDZ2, my friend. One disk of redundancy is simply not enough. If a second disk fails while resilvering, which can and does happen, then your entire array is lost.
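        The intuition can be made concrete with a back-of-envelope model (assuming independent failures, which is optimistic for drives from the same batch; the function and its parameters are my own sketch, not ZFS math):

        ```python
        from math import comb

        def p_array_lost_during_rebuild(remaining: int, parity_left: int,
                                        p: float) -> float:
            """Probability the array is lost during a rebuild: more than
            `parity_left` of the `remaining` drives fail, each failing
            independently with probability p over the rebuild window."""
            return sum(comb(remaining, k) * p ** k * (1 - p) ** (remaining - k)
                       for k in range(parity_left + 1, remaining + 1))

        # A 4-drive RAIDZ1 rebuilding after one failure has 3 drives left
        # and no parity margin; a 4-drive RAIDZ2 in the same spot can
        # still absorb one more failure.
        ```

        With, say, a 1% per-drive failure chance during the rebuild window, the Z1 case loses the pool roughly 3% of the time, while the Z2 case stays well under 0.1%.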

        • Atemu@lemmy.ml · 1 year ago

          You must be running an incredible HA software stack for uptime increases that far behind the decimal point to matter.

        • SexyVetra@lemmy.world · 1 year ago

          Hard agree. I regret only using Z1 for my own NAS. Nothing’s gone wrong yet 🤞 but we’ve had to replace all the drives once so far, which has led to some buttock clenching.

          When I upgrade, I will not be making the same mistake. (Instead I’ll find shiny new mistakes to make)

            • Archer@lemmy.world · 1 year ago

              > Instead I’ll find shiny new mistakes to make

            This should be the community slogan