Since my doctor recommended that I put more fiber in my diet, I decided to comply.
So… in a few hours, I will be pulling a few OS2 fiber runs across my house, with 10G LR SFP+ modules.
Both runs will go from my rack to the office. One will be dedicated to the incoming WAN connection (coupled with the existing fiber that… I don't want to re-terminate). The other will replace the 10G copper run already in place, to save 10 or 20 W of energy.
This was sparked by a 10GBase-T module overheating and becoming very intermittent earlier this week, causing a bunch of issues. After replacing the module, the links came back up and started working normally… but… yeah, I need to replace the 10G copper links.
With only twinax and fiber 10G links plugged into my 8-port aggregation switch, it is only pulling around 5 watts, which is outstanding given that a single 10GBase-T module uses more than that.
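Out of curiosity, here's the back-of-the-envelope math on what 10 to 20 W of always-on draw adds up to over a year. Quick Python sketch; the electricity rate is an assumed placeholder, not my actual bill, so plug in your own:

```python
# Rough annual cost of an always-on load, e.g. a pair of 10GBase-T modules.
# The $0.15/kWh rate is an assumption; substitute your local rate.

HOURS_PER_YEAR = 24 * 365
RATE_USD_PER_KWH = 0.15  # assumed electricity price

def annual_cost(watts: float, rate: float = RATE_USD_PER_KWH) -> float:
    """Yearly cost in dollars of a constant load of `watts`."""
    kwh_per_year = watts * HOURS_PER_YEAR / 1000
    return kwh_per_year * rate

for saving in (10, 20):
    kwh = saving * HOURS_PER_YEAR / 1000
    print(f"{saving} W saved = {kwh:.0f} kWh/yr = ${annual_cost(saving):.2f}/yr")
```

At that assumed rate it works out to roughly $13 to $26 a year, so honestly it's as much about heat and reliability in the rack as it is about the power bill.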
Edit:
Also, I ordered the wrong modules. BUT… the hard part of running the fiber is done!
I have a crawlspace, so I can go straight through the floor. I also already have a nice tight grommet for routing cables to/from the server closet, which makes it pretty easy to run more.
I will say, one of the really nice things about having a crawlspace is that it makes it effortless to run cabling across the house.
I am not very good at doing sheetrock, and especially not good at re-texturing it, so I typically rely on floor grommets hidden out of sight and out of mind.
Ahhh gotcha! You should document and post the process. I’d love to see it!
I have quite a bit of it already documented!
Might be worth a read.
Although, I will note the 40G project is quite a bit more interesting than these 10G runs. I did also run 100G a year or so back, but never posted anything about it due to a ton of firmware issues on the 100GbE NICs.
Interesting blog!
Clicked on your NAS article (one of the first linked ones) and spotted an error… you write that Synology NAS boxes don’t use standard RAID, but they do. They have official docs up on how to hook them up to a standard Linux system for disaster recovery (it’s just Btrfs or ext4 on mdadm RAID).
Probably not super relevant for you or most readers, but just thought I’d point it out :)
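(If anyone's curious, the recovery really does boil down to assembling the md arrays on any Linux box and mounting the volume. A rough sketch of those steps in Python, just wrapping the standard mdadm/mount tools; the /dev/md2 and /mnt/syno names are assumptions, check /proc/mdstat on your own system, and some Synology volumes add an LVM layer on top that needs activating first.)

```python
#!/usr/bin/env python3
"""Sketch: recovering a Synology-style md RAID volume on a plain Linux host.

Assumptions: the drives are attached to this machine, mdadm is installed,
and the data volume is plain Btrfs/ext4 directly on mdadm (no LVM layer).
Run as root. Illustrative only, not Synology's official procedure.
"""
import subprocess

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Scan the attached drives and assemble any md arrays found on them.
run(["mdadm", "--assemble", "--scan"])

# Show what was assembled so you can pick the data array (usually the largest).
run(["cat", "/proc/mdstat"])

# Mount the data array read-only somewhere safe; /dev/md2 and /mnt/syno are
# assumed names -- verify against /proc/mdstat before mounting.
run(["mount", "-o", "ro", "/dev/md2", "/mnt/syno"])
```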
Interesting, was not aware of that.
I am going to assume you found the post regarding the $500 closet NAS I built a few years ago.
One of the driving reasons behind including that was actually to take a jab at Drobo units, which, after a failure… while recoverable, take a decent amount more effort than just plugging the drives in elsewhere.
Yeah, Synology is pretty good with that kind of stuff (we use one at work). They've really just got a Linux system with custom management tools on top. Of course, for DIY purposes, self-building is still cheaper and more flexible.
I might have to give them another evaluation.
My current issue… is just the amount of energy needed to run this bulk storage array. I need to identify a solution that gives me a large number of drives, good performance, AND low energy usage.
The effort of achieving this in the average UK house is so much more than in what I am guessing is OP's American house.
Crawlspaces don't exist, and instead of sheetrock we have double-skinned solid brick walls.
It also depends on the area. Generally, the higher-quality houses here are built on a solid foundation without a crawlspace.
And don't make it sound like you got the short end of the stick! Having a solid house built from double-brick walls sounds fantastic compared to my house built with 2x4s, which are not even 2"x4". Especially when we get a ton of very strong wind…
Had a 100mph wind gust recently, knocked off half of the roofs in my town.
___
I lived in a hundred-year-old row home and I feel your pain. I had to rent a hammer drill to run Ethernet to my office, which was draped along the outside of my house.
Did the same. Three-story townhouse. To run Cat6 from top to bottom, I went out, round the side of the house, up the wall, and back in.