• Anarch157a@lemmy.world

    I already did, a few months ago. My setup was a mess: everything tacked onto the host OS, some stuff installed directly, other stuff as Docker containers, and the firewall was just a bunch of hand-written iptables rules…

    I got a newer motherboard and CPU to replace my ageing i5-2500K, so I decided to start from scratch.

    First order of business: something to manage VMs and containers. Second: a decent firewall. Third: one app, one container.

    I ended up with:

    • Proxmox as VM and container manager
    • OPNsense as firewall. The server has 3 network cards (1 built in, 2 on PCIe slots); the 2 add-on cards are passed through to the OPNsense VM, and the built-in one is for managing Proxmox and for the containers (rough passthrough commands below).
    • A whole bunch of LXC containers running all sorts of stuff.
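
    For reference, the NIC passthrough boils down to enabling IOMMU and handing the PCI devices to the VM; a rough sketch, where the VM ID and PCI addresses are examples (check yours with lspci):

    ```
    # kernel cmdline needs intel_iommu=on (or amd_iommu=on for AMD), then:
    qm set 100 -hostpci0 01:00.0   # first add-on NIC -> OPNsense VM (ID 100 is an example)
    qm set 100 -hostpci1 02:00.0   # second add-on NIC
    ```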

    Things look a lot more professional and clean, and it’s all much easier to manage.

      • Anarch157a@lemmy.world

        Can’t say anything about CUDA because I don’t have Nvidia cards nor do I work with AI stuff, but I was able to pass the built-in GPU on my Ryzen 2600G to the Jellyfin container so it could do hardware transcoding of videos.

        You need the drivers for the GPU installed on the host OS, then link the devices under /dev into the container. For AMD this is easy, because the drivers are open source and included in the distro (Proxmox is Debian-based); for Nvidia you’d have to deal with the proprietary stuff both on the host and in the containers.
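
        In case it helps anyone, the “link the devices” part is just a couple of lines in the container’s config. A minimal sketch, using 101 as an example container ID:

        ```
        # /etc/pve/lxc/101.conf (101 is an example ID)
        # allow the DRI devices (char major 226) inside the container
        lxc.cgroup2.devices.allow: c 226:* rwm
        # bind-mount the host's /dev/dri so Jellyfin sees the GPU nodes
        lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
        ```

        For an unprivileged container you may also have to sort out permissions on the render device (video/render group membership), so treat this as a starting point.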

      • oken735@yukistorm.com

        Yes, you can pass any GPU through to containers pretty easily. Passthrough to a brand-new VM is also easy, but if you’re trying to add it to an existing VM you can run into problems.

  • DilipaEli@lemmy.world

    To be honest, nothing. I’m running my home server on a NUC with Proxmox and an 8-bay Synology NAS (though I’m glad I went with 8 bays back then!).
    As a router I have OPNsense running on a low-powered mini PC.

    All in all, I couldn’t wish for more (low power, high performance, easy to maintain) for my use case, but I’ll soon need storage and RAM upgrades on the Proxmox server.

  • alteredEnvoy@feddit.ch

    Get a more powerful but quieter device. My 10th gen NUC is loud and sluggish when a mobile client connects.

  • thejevans@lemmy.ml

    My current homelab is running on a single Dell R720xd with 12x 6TB SAS HDDs. I have ESXi as the hypervisor, with a pfSense gateway and a TrueNAS Core VM. It’s compact, has lots of redundancy, can run everything I want and more, has IPMI and ECC RAM. Great, right?

    Well, it sucks back about 300W at idle, sounds like a jet engine all the time, and having everything on one machine is fragile as hell.

    Not to mention the Aruba Networks switch and Eaton UPS that are also loud.

    I had to beg my dad to let it live at his house, because no matter what I tried (custom fan curves, better C-state management, a custom enclosure with sound isolation and ducting), I could not dump heat fast enough to make it quiet, and it was driving me mad.

    I’m in the process of doing it better. I’m going to build a small NAS using consumer hardware and big, quiet fans; I have a fanless N6005 box as a gateway; and I’m going to convert my old gaming machine to a hypervisor using Proxmox, with each VM managed with either docker-compose, Ansible, or NixOS.

    …and I’m now documenting everything.

    • Wingy@lemmy.ml

      I’ve had an R710 at the foot of my bed for the past 4 years and only decommissioned it a couple of months ago. I haven’t configured anything but I don’t really notice the noise. I can tell that it’s there but only when I listen for it. Different people are bothered by different sounds maybe?

      • thejevans@lemmy.ml

        I had an R710 before the R720xd. The R710 was totally fine; the R720xd is crazy loud.

      • MangoPenguin@lemmy.blahaj.zone

        That’s crazy to me! I had an R710 and that thing was so loud. I could hear it across the house.

        For me if I can hear it at all when sitting near it in a quiet room, it’s a no-go.

  • traches@sh.itjust.works

    The only real pain point I have is my hard drive layout. I’ve got a bunch of different drive sizes that are hard to expand on without wasting space or spending a ton.

    • nhoad@lemmy.world

      Depending on your comfort level and setup, you could use LVM. Then the differently sized hard drives wouldn’t be such a problem.

      Or, if you want a much more involved setup, you could go with Ceph. It also gives you redundancy, but it’s a really steep learning curve.
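
      A minimal sketch of the LVM route, with /dev/sdb and /dev/sdc standing in for whatever drives you actually have:

      ```
      # pool two differently sized drives into one volume group
      pvcreate /dev/sdb /dev/sdc
      vgcreate media /dev/sdb /dev/sdc
      # one big logical volume spanning both drives
      lvcreate -l 100%FREE -n library media
      mkfs.ext4 /dev/media/library
      ```

      Bear in mind a spanned volume like this has no redundancy: lose one drive and the whole volume is toast. That’s where Ceph (or RAID underneath LVM) earns its complexity.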

          • traches@sh.itjust.works

            I’m on btrfs. I have a 14TB drive, a 16TB drive, and two 7TB drives in RAID1. I’m running out of space for all my Linux ISOs, and I’d really like to transition to some sort of 3:1 or 4:1 parity RAID, but btrfs’s RAID5/6 modes are the ones you’re not supposed to use, and I don’t see a clear migration path to a ZFS pool or something similar.
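
            For what it’s worth, the ZFS end state would look something like this (device names are examples, and raidz sizes every member to the smallest disk, which is exactly where mixed drive sizes hurt):

            ```
            # hypothetical 4-disk raidz1: usable capacity of 3 disks, 1 disk of parity
            zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
            ```

            And since there’s no in-place conversion from btrfs, you’d need somewhere to park the data while you rebuild, which is the “no clear path” part.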

  • aucubin@lemmy.aucubin.de

    Getting a better rack. My 60cm-deep rack with a bunch of rack shelves and no cable management is not very pretty, and moving servers around is pretty hard.

    Hardware-wise I’m mostly fine with it, although I would use a platform with IPMI instead of AM4 for my hypervisor.

  • ThorrJo@lemmy.sdf.org

    Go with used & refurb business PCs right out of the gate instead of fucking around with SBCs like the Pi.

    Go with “1-liter” aka Ultra Small Form Factor right away instead of starting with SFF. (I don’t have a permanent residence at the moment so this makes sense for me)

    • constantokra@lemmy.one

      Ah, but now you have a stack of Pis to screw around with, separate from all the stuff you actually use.

  • Showroom7561@lemmy.ca

    Instead of a 4-bay NAS, I would have gone with a 6-bay.

    You only realize how expensive it is to expand your storage when you have to REPLACE HDDs rather than simply add more.

      • Showroom7561@lemmy.ca

        I’ve been pretty happy with my Synology NAS. Literally trouble-free, worry-free, and “just works”. My only real complaint is them getting rid of features in the Photos app, which is why I’m still on their old OS.

        But I’d probably build a second NAS on the cheap, just to see how it compares :)

        What OS would you go with if you had to build one?

        • Luke@lemmy.nz

          I’m happy with Synology too, for the most part. But since I like a bit more flexibility, I’d probably build one and use TrueNAS or Unraid.

        • Luke@lemmy.nz

          I’ve got the Argon ONE V2 with an M.2 drive. Works well, though I haven’t tested speeds. Not using it as a NAS, though.

    • billm@lemmy.oursphere.space

      Yes, but you’ll be wishing you had 8 bays when you fill the 6 :)

      At some point you have to replace disks to really increase space, so don’t make your RAID volumes consist of more disks than you can reasonably afford to replace at one time.

      Second lesson: if you have spare drive bays, use them as part of your upgrade strategy, not as additional storage. I started this last iteration with 6x 3TB drives in a raidz2 vdev, then opted to add another 6x 3TB vdev instead of biting the bullet and upgrading. Now, to add more storage, I need to replace 6 drives. Instead, I built a second NAS to back up the primary, and I’m pulling all 12 disks and dropping back to 6. If/when I increase storage, I’ll drop in 6 new ones and MOVE the data instead of adding capacity.
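
      The replace-to-grow dance looks roughly like this (pool and device names are examples):

      ```
      zpool set autoexpand=on tank   # let the vdev grow once all members are bigger
      zpool replace tank sda sdg     # swap one disk at a time...
      zpool status tank              # ...waiting for each resilver to finish
      ```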

  • constantokra@lemmy.one

    I built a compact NAS. While it’s enough for the drives I need, even for upgrades, I only have one PCIe x4 slot, which is becoming a bit limiting. I didn’t think I’d have a need for either a tape drive or a graphics card, and now I have some things I want to do that require both. Well, I can only do one unless I get a different motherboard and case, which means I’m basically doing a new build, and I don’t want to do either of the projects I had in mind badly enough to bother with that.

  • rarkgrames@lemmy.world

    I have things scattered around different machines (a hangover from my previous network configuration that was running off two separate routers) so I’d probably look to have everything on one machine.

    Also I kind of rushed setting up my Dell server and I never really paid any attention to how it was set up for RAID. I also currently have everything running on separate VMs rather than in containers.

    I may at some point copy the important stuff off my server and set it up from scratch.

    I may also move from using a load balancer to manage incoming connections to doing it via Cloudflare Tunnels.
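
    The Tunnels side is pretty light, for anyone curious; a sketch with the tunnel name and hostname as placeholders:

    ```
    cloudflared tunnel login                                # authenticate with Cloudflare
    cloudflared tunnel create homelab                       # create a named tunnel
    cloudflared tunnel route dns homelab app.example.com    # point a hostname at it
    cloudflared tunnel run --url http://localhost:8080 homelab
    ```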

    The thing is, there’s always something new to tinker with and learn, and I’ve learnt a lot building my little home lab.

    Is my setup optimal? Hell no. Does it work? Yep. 🙂

  • Meow.tar.gz@lemmy.goblackcat.com

    That’s a pretty good question. Since I’m new-ish to the self-hosting realm, I don’t think I would have replaced my consumer router with the Dell OptiPlex 7050 that I decided on, even though it does make things very secure, considering the router is powered by OpenBSD. Originally I was just participating in DN42, which is one giant semi-mesh VPN network, and out of that hatched the idea to yank stuff out of the cloud. Instead, I would have put the money towards building a dedicated server rather than using my desktop as a server. At the time I didn’t realize how cheap older Xeon processors are; I could have cobbled together a powerhouse multi-core, multi-threaded Proxmox or XCP-ng server for maybe 500-600 bucks. Oh well, lesson learned.