I’m looking for 16TB HDDs. They’ll be for fairly light usage. Immich will be the heaviest thing running on it.
New? Used? Certified? Like this?
I’ve been running a 12-bay NAS for 8 years; the only drives I’ve had fail have been Seagate drives.
I now use WD Reds and, lately, Toshiba drives due to their very good price point on eBay.
The Toshiba drives do seem to be slightly noisier though, so it depends where your server is located.
I have a mix of shucked, new, and used drives in my home server.
WD reds out of some USB enclosures that are pushing 7 years old, some new EXOS drives that are pushing 4, and some refurbed EXOS drives that are pushing 2 years now.
Zero issues, but I’m also running them as basically stand-alone drives with mergerfs and snapraid. I don’t really care about 99% of the data, since I can just like, download all the ISOs again, but in x265 encoded versions.
7 drives, zero failures, though I’m expecting the 8tb reds to start dying any minute now.
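For anyone curious what a mergerfs + snapraid setup like that looks like, here’s a minimal sketch. All mount points and paths below are hypothetical examples, not the commenter’s actual config; adjust to your own drives.

```
# /etc/fstab — pool the data drives into one mount with mergerfs
/mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/storage  fuse.mergerfs  defaults,allow_other,category.create=mfs  0 0

# /etc/snapraid.conf — one parity drive protects the data drives
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
data d3 /mnt/disk3/
```

The drives stay independent filesystems, which is the point: losing one disk only loses that disk’s files (recoverable from parity with `snapraid fix`), and parity is updated on your schedule with `snapraid sync` rather than in real time.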
I’ve used serverpartdeals for 2 Seagate 16TB drives. I had 1 drive start to show signs of premature failure (unusually slow read/write speeds and read errors).
Their support is amazing. They shipped me a replacement via 2nd Day Air (after I asked for an advance replacement) so I could rebuild my array before returning the old drive. No cost to me.
Good service gives me way more confidence in a store front than just positive product reviews. Can’t recommend these guys enough.
They know that if a customer is noticing those signs, they’re savvy enough to pick a different vendor if the store doesn’t offer good support.
I’ve been buying Water Panther refurbished drives.
Both Arsenal (I’ve had them for a while) and SaveGreen (they were just released recently), and I’ve had a good experience with them for how cheap they are, warranty included.
Only filed 1 RMA, and the turnaround was fairly quick.
They are also Seagate drives, and the SaveGreens have different firmware.
I bought two of those serverpartdeals drives after seeing them recommended on Reddit while searching for good refurbished drives. They have been running fine for a few months now with no errors in the SMART data.
Bought 8 16TB Ultrastars from them. Haven’t had a single issue with them
Best Buy WD Elements. On sale every couple of weeks. Crack open the shell and you have a WD Red or White.
My second machine uses such disks. They work fine. They’re a bit more expensive than recertified datacenter WDs from SPD though. I can run a 48TB array with 4-disk redundancy with such disks from SPD for the price of the equivalent 48TB array made of shucked external WDs with 2-disk redundancy. The 4-disk redundant system will be more performant. I’m going to use my shucked WD array till it croaks but I’d be buying recertified replacements as the disks die.
I’ve been doing this for at least a decade now and the drives are just as reliable as if you bought them normally. The only downside is having to block the 3.3V power-disable pin on the SATA power connector with Kapton tape for the drive to spin up.
Sometimes yes, but I haven’t seen that lately. 5 years ago I had several of those, but I haven’t seen it recently
I think it also depends on the host. I’m running some power-disable disks in my boxes and they didn’t require adapters or tape.
This is a great reference:
https://www.backblaze.com/blog/backblaze-drive-stats-for-2023/
Serverpartdeals is a good source for cheaper drives.
I always buy new because time spent fixing a problem or recovering data with a used drive ain’t worth it to me. It may be to you. A manufacturer refurb might be ok, in fact I do buy refurb monitors sometimes, but not data storage.
Sounds a bit like not enough redundancy. Once you go into redundant mode, the individual disk quality is no longer nearly as important. 2 or 3 disk redundancy, and you can use whatever garbage comes your way.
All well and good until you lose another disk 2 days into re-striping. Which is not that uncommon because that puts a lot of load on the surviving disks! Remember, RAID is not a backup.
That’s why the extra redundancy. The probability of 2 or 3 disks failing should be significantly lower than 1 disk failing. I currently run 2-disk redundancy. If 1 disk fails, I’d replace it. If a second disk fails while the replacement is being resilvered, I’d shit a brick, stop the resilver and make an incremental backup to ensure I won’t lose data if another disk fails due to the resilver load. Then I’d proceed with the resilver. RAID is not backup and the extra redundancy is there to reduce the probability to have to spend time restoring backups. Increased redundancy can compensate for individual disk reliability.
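The intuition that extra redundancy can outweigh individual disk quality is easy to make concrete with a toy binomial model. The failure rates below are made-up illustrative numbers, and the model assumes independent failures, which resilver load violates in practice, so treat it as a rough sketch rather than a real risk estimate.

```python
from math import comb

def p_array_loss(n_disks: int, redundancy: int, p_disk: float) -> float:
    """Probability that more than `redundancy` of `n_disks` fail,
    assuming independent failures with per-disk probability `p_disk`."""
    return sum(
        comb(n_disks, k) * p_disk**k * (1 - p_disk) ** (n_disks - k)
        for k in range(redundancy + 1, n_disks + 1)
    )

# Toy comparison: 8 "reliable" disks (2%/yr) with 2-disk redundancy
# vs 10 cheaper disks (5%/yr) with 4-disk redundancy.
print(p_array_loss(8, 2, 0.02))   # new drives, RAIDz2-style
print(p_array_loss(10, 4, 0.05))  # cheap refurbs, more parity
```

Under these made-up rates the cheaper-but-more-redundant array comes out with a lower probability of total loss, which is the argument being made here.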
That assumes you don’t value your time spent dealing with troubles that come.
Like the other person said, it’s fine if you don’t, but for me it’s worth a little upfront cost to spend less time ordering new drives, putting the drive in the server, monitoring the rebuild of the array, etc.
None of that is an excuse for lack of proper backups. Because even new drives can fail catastrophically.
I don’t understand how this follows from what I said. 🤔 I called for increasing redundancy to compensate for the increased risk of failure. That’s the purpose of redundancy. Reducing the time spent dealing with troubles. Unless you consider replacing a disk to be a significant time spent. To me it isn’t because it’s fairly trivial in my setup. Perhaps it’s more work in other setups.
Depending on the prices, you may even be able to add significantly more redundancy by using recertified disks, potentially reducing the risk even more than running new drives. E.g. 4-disk redundancy vs 2-disk for the same price. Running a significantly more redundant setup not only decreases the probability of an array failure but it should also reduce the mechanical load each disk experiences over time which should further decrease failure risk.
I’ve been buying used 12tb HGST from Amazon for $80.
There’s a lot of comments talking about used and refurbs. I personally use these types to get good deals, but I also have a reasonably robust backup protocol. Not a full 3-2-1 backup, but an appropriate level of risk for my needs.
My point being, if you go that route, they’re cheaper but the odds that one dies on you might be higher. Make sure you manage your backup strategy to a risk value you’re comfortable with.
That said, I’ve also had great experiences with serverpartdeals. I’ve also used diskprices.com to find deals.
Things to consider are noise, temps, power-on time, etc. For myself, temps are fairly consistent in my case and it’s in a closet so I don’t care about noise. I also don’t need particularly fast access on the HDDs (I use an nvme cache strategy as well) so I can pretty much use whatever. Your needs might differ.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
- RAID: Redundant Array of Independent Disks (for mass storage)
- SATA: Serial AT Attachment (interface for mass storage)
- ZFS: Solaris/Linux filesystem focusing on data integrity
[Thread #874 for this sub, first seen 17th Jul 2024, 04:55]
I’m using a Western Digital refurb HDD. 14TB.
Running 24/7 pretty much since the pandemic. It’s basically my media server.
A WD? I also like living dangerously.gif 😊
I kid of course. It could be absolutely appropriate depending on the data and budget.
It is absolutely a dangerous life lol.
I keep everything backed up. It was a temporary purchase that I fully did not expect to last 5 years.
It’s literally been a clicker since day 1 lol.
No failures. No slow reads. Just a zombie beast.
I bought 5x 16TB recertified WD from SPD. Running in RAIDz2 (2-disk redundancy) config since April. I’ve yet to have an issue. They have a 3-year manufacturer warranty, so it’s not even a huge deal if some die in a while. I paid USD $160 per drive.
What do you use for raid?
ZFS
To add to that: with ZFS RAID everything is done in software, so there is no hardware lock-in.
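For reference, creating a RAIDz2 pool like the one described above is a couple of commands. The pool name and device paths here are placeholders, not anyone’s real setup; using `/dev/disk/by-id/` paths is the common advice so the pool survives device reordering.

```
# Create a 5-disk RAIDz2 pool (any 2 disks can fail).
zpool create tank raidz2 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
  /dev/disk/by-id/ata-DISK5

zpool status tank    # health, errors, resilver progress
```

Because the array layout lives on the disks themselves rather than in a RAID card, `zpool import` can bring the same pool up on any machine with ZFS installed, which is the "no hardware lock-in" point.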