Hey, I want to dip my feet into self-hosting, but I find the hardware side of things very daunting. I want to self-host a Minecraft server (shocking, I know), and I’ve actually done this before both on my own PC and through server hosts. I’d like to run a Plex server as well (Jellyfin is champ now, it sounds like? So maybe that instead), but I imagine the Minecraft server is going to be the much more intensive side of things, so if the hardware can handle that, Plex/Jellyfin will be no issue.

The issue is, I can’t seem to find good resources on the hardware side of building a server. I’m finding it very difficult to “map out” what I need: I don’t want to skimp and end up with something much less powerful than what I need, but I also don’t want to spend thousands of dollars on something extremely overkill. I looked through the sidebar, but it seems to mostly cover the software side of things. Are there any good resources on this?

  • 12bitmisfit@sh.itjust.works

    Modded Minecraft servers are heavily dependent on single-threaded performance. For more vanilla servers, Paper helps a lot. For Forge I highly recommend trying Mohist. It isn’t compatible with all Forge mods, but it works well enough that you can just replace the server jar in many modpacks and see a large performance boost.

    The biggest thing that slows down MC servers in my experience is world gen. Pre-generating the world and adding a world border can help a lot.
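
    If it helps, here’s a rough sketch of how that pre-generation step could be scripted over RCON. It’s only an illustration: the `mcrcon` package, the placeholder password, and the Chunky commands are assumptions on my part, so swap in whatever pregen tool you actually use.

    ```python
    # Sketch: set a world border and kick off pre-generation over RCON.
    # Assumes RCON is enabled in server.properties and the Chunky plugin/mod
    # is installed; command names differ for other pregen tools.
    from mcrcon import MCRcon  # pip install mcrcon

    HOST = "127.0.0.1"
    PASSWORD = "change-me"     # placeholder; match rcon.password in server.properties

    with MCRcon(HOST, PASSWORD, port=25575) as mcr:
        # Cap the playable area so new chunks stop generating at the edge.
        print(mcr.command("worldborder center 0 0"))
        print(mcr.command("worldborder set 10000"))   # border 10,000 blocks wide

        # Pre-generate everything inside that border ahead of time.
        print(mcr.command("chunky center 0 0"))
        print(mcr.command("chunky radius 5000"))      # half the border width
        print(mcr.command("chunky start"))
    ```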

    I’ve not done a larger-scale Fabric server, so I can’t offer much advice on optimizing it, but the client speed-ups available through Fabric look very impressive.

    If you are running a server without world borders, or with a lot of simultaneous players, I’d look in depth at what SSD you’re saving the world to. You want a DRAM cache; random write speeds are way more important than sequential. If you can find an Intel Optane for cheap, they are pretty amazing. That said, the SSD is less important than your CPU and having enough RAM to run the server.

    Generally an older gaming PC is better than an older server; again, you are targeting single-threaded performance. If you are purchasing hardware, it might make more sense to go with lower-end new hardware than higher-end old hardware. It’s all about trade-offs for your use case and budget. For a long time I just used my main PC to play games and host servers (RAM is cheaper than another PC), but I tinker too much to keep good ‘server’ uptime.

    Transcoding can get pretty taxing on a system, but any semi-modern quad core can handle a few 1080p streams or a 4K stream. Plus you can use a GPU for transcoding. The nice thing is it scales with core count pretty well, so older server or workstation hardware works well.

    • Tippon@lemmy.dbzer0.com

      The only thing I’d add to this is that the people who make the Paper Minecraft server are working on Folia, a multi-threaded server. It’s probably worth looking into if you’re starting from scratch :)

    • Sethayy@sh.itjust.works

      Fabric has some amazing open source projects dedicated to performance.

      I don’t know if any of them multithread it yet, but it’s my current go-to for low-end systems.

  • whofearsthenight@lemm.ee

    There are a few things I’d consider:

    • How many users are going to be on the MC server? MC is pretty notorious for eating RAM, and since most of my home server adventures involve multiple VMs, I would look for something with at least 32 GB of RAM (see the sketch just after this list).
    • For Plex (I’m guessing similar is going to be the case for Jellyfin): how many users do you expect to support concurrently, and how good are you at downloading in formats the clients can direct play? Most remote Plex users are going to require transcoding because of bandwidth limits, but if most of your local clients can direct play, or you have a good upload and don’t have to transcode 3+ streams at a time, you’re probably fine with just about anything from the last 10 years in terms of CPU.
    • Also re: Plex, do you have any idea of your storage requirements? Again, if you’re just getting started and have < 10 TB of storage in mind, you can get by with most computers.
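
    On the RAM point above, here’s a minimal sketch of what launching a Paper server with an explicit heap looks like. The directory, jar name and 8 GB heap are placeholder assumptions, not recommendations:

    ```python
    # Sketch: start a Paper server with a pinned JVM heap so Minecraft's RAM
    # use stays predictable next to your other services.
    import subprocess
    from pathlib import Path

    SERVER_DIR = Path("/srv/minecraft")   # wherever your server files live
    HEAP = "8G"                           # placeholder; modded packs usually want more

    subprocess.run(
        ["java", f"-Xms{HEAP}", f"-Xmx{HEAP}",  # min = max avoids heap resizing stalls
         "-jar", "paper.jar", "nogui"],
        cwd=SERVER_DIR,
        check=True,
    )
    ```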

    Anyway, to give you an idea, I run both of these and quite a few other things besides on a Dell R710 I bought like 4 years ago and never really have any issue.

    My suggestion would be to grab basically any old computer lying around, or hit up eBay for a ~$100-$200 used server (be careful about 1Us, or rack mounts in general, if noise is a concern; you can get normal tower-case servers as well) and start by running your services on that. That’s probably just about what all of us have done at some point. Honestly, your needs are pretty slim unless you’re talking about hosting those services for hundreds of people; if you’re just hosting for you and a few friends or immediate family, pretty much any computer will do.

    I wanted to keep things very budget-conscious, so I have the R710 paired with a Rackable 3016 JBOD bay. The R710 and the Rackable were both about $200, and then I had to buy an HBA card to connect them, so another $90 there. The R710 has 64 GB of RAM, I think dual Xeons, plus eight 2.5" slots. The Rackable has 16 3.5" slots, which means I basically don’t have to decommission drives until they die. I run unRAID on the server, which also means that I can easily get a decent level of protection against drive failure, and I don’t have to worry about matching up drives and all that. I put a couple of cheap SSDs in the 710 for cache drives and to run things I wanted to be a little more performant (the MC server, though tbh I never really had an issue running it on spinning disks), and this setup has been more or less rock solid for about 5 years now, hosting these services for about 10 people.

  • aidanbell@lemmy.ca

    Honestly for what you’re trying to accomplish, any PC built in the past 10 years would suffice.

    I’d say the bigger issue would be what server operating system you want to run. Personally I use Unraid and I love it; all of the apps you mention and more are available as pre-made Docker templates in the Community Apps plugin. I’ve tried Windows and FreeNAS before, but I find Unraid just so user-friendly and reliable.

  • Monkey With A Shell@lemmy.socdojo.com

    Media servers can be pretty demanding, particularly when doing on-the-fly transcoding. Look for refurbished servers; big companies routinely toss perfectly good hardware as part of product lifecycle management. A favorite of mine is called ‘techmikeny’, although their site and search are pretty janky.

    I/O performance needs to be considered along with the number of processing threads, which really comes into play if you have a lot of virtual machines/containers running. For less than $1000 upfront you can get well more than you think you need, with room to grow. I’d say focus on CPU first; it’s easy to add memory and storage later if you buy a big enough box to have extra slots open, but adding CPUs is more of a pain.

    Electricity and noise should be a thought too. My largest box is using about 240 watts right now, and if you go with actual rack servers they tend to be loud, with a half dozen fans running at 6,000 RPM or so. If you can stash it somewhere out of your living space, all the better.
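
    To put a rough number on the electricity point, a quick back-of-the-envelope sketch (the $0.15/kWh rate is just an assumed example; plug in your own tariff):

    ```python
    # Back-of-the-envelope yearly electricity cost for a ~240 W box.
    # The rate below is an assumed example; use your local price per kWh.
    watts = 240
    rate_usd_per_kwh = 0.15

    kwh_per_year = watts / 1000 * 24 * 365
    print(f"{kwh_per_year:.0f} kWh/year -> ${kwh_per_year * rate_usd_per_kwh:.0f}/year")
    # roughly 2102 kWh/year -> about $315/year at that example rate
    ```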

      • CmdrShepard@lemmy.one

        The power is only needed for transcoding. Multiple 4K streams should be little more than directly serving up the files to the client machine (like your TV), which consumes very few resources. You should avoid transcoding 4K down to 1080p or 720p by either avoiding 4K content, grabbing only stuff that is directly compatible, or keeping duplicate copies in 4K and 1080p so that the 1080p file gets transcoded if needed.

        Many of us have separate 4K libraries on our servers to prevent any possibility of transcoding them (like for remote streams when you don’t have the upload speed to stream 4K directly). For example, I have about a dozen family members using my server remotely, but I don’t share my 4K libraries with them since the best upload I can get with Comcast is 12 Mbps. In the Plex settings I have everyone limited to 3-4 Mbps so that I can handle 3-4 people watching remotely at once, which leads to these streams getting transcoded down to 720p.
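
        As a rough worked example of that bandwidth math (the numbers mirror the ones above; the 20% headroom is my own assumption):

        ```python
        # Rough math for how many capped remote streams fit in a given upload pipe.
        upload_mbps = 12           # total upload bandwidth
        per_stream_mbps = 4        # per-user limit set in the Plex server settings
        headroom = 0.2             # keep some upload free for everything else (assumed)

        usable_mbps = upload_mbps * (1 - headroom)
        streams = int(usable_mbps // per_stream_mbps)
        print(f"{streams} concurrent remote streams at {per_stream_mbps} Mbps each")
        # -> 2 with headroom, or 3 if you run the pipe flat out
        ```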

          • CmdrShepard@lemmy.one

            That was just an example of when you might need to transcode multiple streams at once. Typically you shouldn’t need to transcode anything, especially if you’re just watching at home. In that case you can have dozens of streams in any resolution running at once without the computer sweating at all.

      • Monkey With A Shell@lemmy.socdojo.com

        Demanding in a transient way, I might say, rather than constantly. I use Emby, and when something is streaming to a Roku in a format that’s not native it ends up using around 80% of the allocated power. I don’t use the throttling option though, so it’s actually working well ahead of the stream and finishes transcoding a full movie in a few minutes rather than going along in real time.

        So yeah, it could be heavily mitigated, but I’d rather just have it done than hope it’s smart enough to plan ahead.

      • maxprime@lemmy.ml

        Yes, exactly. Quick Sync has been on Intel CPUs (i5 and up) since Sandy Bridge, but from what I’ve heard it’s only really worth using from 4th gen onwards.

        I would recommend a used SFF PC for Docker, and a separate NAS like a QNAP for file storage.

  • Dandroid@dandroid.app

    I just use my old gaming PC, GPU and all. I self host quite a few services on it and I have yet to find something that puts it into high usage.

  • JohnWorks@sh.itjust.works

    It really kinda depends on what type of Minecraft server you’re running. I was running Plex and Minecraft on Unraid with like 16 GB of RAM and an i3-8100, and it was fine until I started doing more intense modded Minecraft. The iGPU in Intel processors can handle transcoding really well, so it’s a pretty good all-in-one solution. I imagine if you’re going heavily modded Minecraft you could probably get away with a current-gen i5, or maybe even an i3 if you’re on a budget. Looks like the i3-13100 has hyper-threading and my old 8100 didn’t; not sure how big of a difference that would make.

  • Bloved Madman@lemmy.world

    I’ve gone with Unraid and consumer-level hardware (Intel i3-12100 and 16 GB of standard DDR4 RAM). The only “server hardware” I have is an LSI HBA card flashed to IT mode so I can connect more HDDs.

    I’ve even used SMR drives in my array; just use a good CMR drive for parity and the biggest SSD you can get for your cache drive and you will be good to go.

  • Tippon@lemmy.dbzer0.com

    I don’t have any better hardware suggestions than you’ve already been given, but I would recommend avoiding Plex.

    I get problems with it pretty much every day. Usually it’s the client stuttering and crashing, but I often see the client connect to the server and then fail to get a list of media back.

    I’ve bought a lifetime licence, but I’m looking into switching to Jellyfin because it’s so frustrating.

  • TCB13@lemmy.world

    Don’t get server hardware; regular desktop/laptop machines will be more than enough for you. Server hardware is way more expensive and won’t give you any advantage. If you’re looking to buy, you can even get very good 9th/10th-gen Intel CPUs and motherboards that are perfect for running servers (very high performance) but that people don’t want because they aren’t good for playing the latest games. This hardware is also way more power-efficient, and sometimes even more powerful, than any server hardware you could get for the same price. Get this hardware for cheap and enjoy.

    • Case@lemmynsfw.com

      I’ve got enterprise-level hardware, rack mountable, all that jazz.

      Between the cost of power and the heat it generates (which means more AC and thus more power), it’s not feasible to run it.

      I’m looking into clustering some Raspberry Pis as more power- (and heat-) efficient hardware for my next project. I’ve barely scratched the surface of the research, though.

      So hey, if anyone has any tips or links, it would be much appreciated.

        • Case@lemmynsfw.com

          Cost and a personal bias; also, I’ve seen more helpful communities amongst Linux and FOSS advocates than when trying to deal with a big brand.

          I’ve done a lot of IT stuff in my life, even before working in IT.

          I’ve seen too many issues from big brands, and it’s usually caused by the company itself.

          I have a Pi 2 from way back. I’ve thrown so many distros at that thing over time, and without fail I don’t run into any problems I didn’t personally create while learning or through human error.

          I understand all too well that those big brands have support for businesses, warranties, etc. It makes them cost effective long term for business. At a personal level I just don’t see the benefits outweighing the negatives.

          Again, personal bias. It’s the same core reason I avoid Apple products, though there I mainly dislike the cost combined with their closed-off, well, everything.

          • TCB13@lemmy.world

            Yes, ARM is great, but compared to server hardware it’s shit when it comes to performance and reliability. If you come from server hardware and you really max it out, you’re going to have a poor experience.

            Also, I personally like to avoid Raspberry Pi and their stuff as much as possible. They’ve done good things for the community, however they have some predatory tactics and shenanigans that aren’t cool. Here are a few examples of what people usually fail to see:

            • Requires a special tool to flash. In the past it was all about grabbing an image and using Etcher, dd or whatever to flash it onto a card; now they’re pushing people to use Raspberry Pi Imager. Without it you won’t be able to easily disable telemetry and/or log in over the network out of the box;
            • Includes telemetry;
            • No alternative open Debian-based OS such as Armbian (only the Ubuntu variant);
            • The Raspberry Pi 5 finally has PCIe, but instead of doing what was right they decided to include some proprietary bullshit connector that requires yet another board made by them. For those who are unaware, other SBC manufacturers simply include a standard PCIe slot OR a standard NVMe M.2 slot. Both are great options, as hardware for them is common and cheap;
            • It is overpriced and behind the times.

            For what it’s worth, the NanoPi M4 released in 2018 with an RK3399 already had a PCIe interface, 4 GB of RAM and whatnot, and was cheaper than the Raspberry Pi 3 Model B+ from the same year, which had Ethernet shared with the USB bus.

            If you don’t want those big brands (I only suggested them because they’re cheap second-hand), build something yourself on consumer hardware or pick a Chinese brand.

            Those big brands are cheap though: for 100€ you can get an HP Mini with an 8th-gen i5 + 16 GB of RAM + 256 GB NVMe that obviously has a case, a LOT of I/O, PCIe (M.2), comes with a power adapter, and more importantly outperforms an RPi 5 in all possible ways. Note that the 8 GB RPi 5 will cost you 80€ + case + power adapter + bullshit PCIe adapter + SD card + whatever other money grab.

            Side note on alternative brands: HP Mini units are reliable, the BIOS is good and things work. The trendy MINISFORUM machines are cool, however their BIOSes come out of the factory with weird bugs and the hardware isn’t as reliable - missing ESD protection on USB in some models and whatnot.

            • Case@lemmynsfw.com

              Performance isn’t key. But I like performance, lol. I also wasn’t aware of their more recent practices. So thank you.

              I’ll have to check out the HP Mini. As I said, I’ve just barely scratched the surface on researching this, and it’s more of a thought than a project at the moment, lol.

              I just can’t afford to run (and cool) enterprise-level stuff at home. It was free (to me), so no big loss other than buying a better used CPU for ~50 bucks. I’ve spent more on worse ideas, lol.

              • TCB13@lemmy.world

                I was just trying to share a bit of my experience. I too have datacenter/server hardware experience and have dealt with a ton of mini computers, and those ARM boards and Chinese brands aren’t what one usually expects when it comes to the most fundamental details.

    • Dandroid@dandroid.app

      I used to host Plex on a Synology. It’s okay, but it struggles when skipping around, and downloads for offline viewing would fail almost every time. I have had a much better experience since switching to my old gaming PC with a GPU.

    • ebits21@lemmy.ca

      Great as a NAS, but pretty low-specced for the cost if you want to host more demanding applications.

      I run Plex on mine, but it was faster on a Raspberry Pi 3, lol.

      For file management, backups, etc., it’s stellar.

  • spudwart@spudwart.com

    If you’re going to use your self-hosted server to run a Minecraft server, get a web-client front-end. Even though it’s not FOSS and therefore a no-no with most Lemmy users, AMP from CubeCoders is a great option; it starts at a flat $10 fee for a permanent license. It has some fantastic features for remote backup, and it supports other games too.

    However, if you’re a bit concerned about that, there is Pterodactyl, which I’ve heard good things about; I went with AMP because it supports other game servers like GMod.

  • hperrin@lemmy.world

    Hardware-wise, you just need a good PC. One thing to note is that graphics are almost irrelevant for servers. In your case it would help to have AV1 encoding, so you could go with a ~$110 Intel Arc A380 or A310.

    The most important thing is RAM. The more server applications you start putting on there, the more RAM you’ll need. 16GB is fine for what you need right now, but make sure your mobo has two extra slots so you can up it to 32 if needed.

    Storage is really up to you. If you want everything on an NVMe, great! If you want everything in a RAID array, expensive, but great! Using mdadm for RAID arrays is fairly easy, just a lot of reading. Make sure you have enough SATA ports to support all the disks you need if you go that route.
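
    If you do go the mdadm route, here’s a minimal sketch of creating and persisting a two-disk mirror. The device names are placeholders and this wipes the member disks, and the boot-persistence steps shown are Debian/Ubuntu-specific, so treat it as an outline rather than a recipe:

    ```python
    # Sketch: build a two-disk RAID 1 mirror with mdadm (run as root).
    # /dev/sdb and /dev/sdc are placeholders -- this destroys whatever is on them.
    import subprocess

    def run(cmd: list[str]) -> None:
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Create the mirror across the two member disks.
    run(["mdadm", "--create", "/dev/md0", "--level=1",
         "--raid-devices=2", "/dev/sdb", "/dev/sdc"])

    # Put a filesystem on the array.
    run(["mkfs.ext4", "/dev/md0"])

    # Record the array so it assembles at boot (Debian/Ubuntu paths shown).
    scan = subprocess.run(["mdadm", "--detail", "--scan"],
                          capture_output=True, text=True, check=True)
    with open("/etc/mdadm/mdadm.conf", "a") as conf:
        conf.write(scan.stdout)
    run(["update-initramfs", "-u"])
    ```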

    CPU: you can skip integrated graphics to save cost, unless you’re buying one whose iGPU does AV1 encoding, in which case you don’t need the A380.

  • aodhsishaj@lemmy.world

    1× KAMRUI Mini PC, Intel N5105, Windows 11 Pro, 8 GB RAM, 256 GB SSD, WiFi 6, 4K, dual LAN (Proxmox arbiter; this will also run your pfSense/OPNsense firewall VM/appliance)

    2× UM350 Mini PC, 16 GB RAM, 512 GB PCIe SSD, AMD Ryzen 5 3550H (these will be the prox2 and prox3 worker nodes)

    Buffalo TS5410DN1604 TeraStation (NAS)

    1× Cisco Catalyst WS-C3650-24PS-S 24-port Gigabit Ethernet switch

    All of that together with the Buy It Now options is around $1,000 USD total with shipping. That Buffalo NAS will need 3.5" drives, so add those to the cost.

    • mr47@kbin.social

      That is complete overkill. You don’t need a cluster of Proxmox nodes for personal hosting, and you certainly don’t need a 24-port switch.

  • Nephalis@discuss.tchncs.de

    I have something to read for you:

    My Request

    It’s a request of mine from earlier this year. The boards I mention in the opening post are not a good choice, but the ASRock J500x or J5040 (the one I picked in the end) are. For my needs it has enough of everything, even if some users here think the Celerons are “heaters that can do math” ^^

    On the other hand, the CPU is soldered to the board, so no upgrade without switching the board either… Even the SODIMM RAM needs to be replaced when switching away from an ITX board…

    Then again, it consumes less energy than using an old desktop CPU etc.

    The pico-PSU is just sweet 😊

    Edit: fixed link

  • MrPoopyButthole@lemmy.world

    You don’t need to buy server hardware, although it is nice. Depending on where you live, you might be able to pick up some decent second-hand server hardware.

    If it were me, I would buy new desktop hardware. Here is a fairly decent server spec that will do almost anything: go for a 16 or 24 core CPU with high GHz per core, and 64 GB or 128 GB of DDR5 RAM. Your most important factor will be storage speed, so go with NVMe drives. You have some choices there:

    • JBOD: one or more independent M.2 drives.
    • Software RAID: use your CPU to manage the RAID configuration.
    • Hardware RAID: use a RAID controller HBA card to manage the RAID (faster, but a single point of failure).
    • Use RAID 1 for data protection (you can lose one drive and still have all your data), RAID 0 to double the speed of your drives, or RAID 10 for the best of both (but it needs double the drives).

    Choose a motherboard that suits your choices.
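
    To make those RAID trade-offs concrete, a quick sketch of usable capacity per level (the drive counts and 2 TB size are just example numbers I picked):

    ```python
    # Quick sketch of usable capacity for the RAID levels above, using 2 TB
    # example drives: two for RAID 0/1, four for RAID 10.
    size_tb = 2

    raid0 = 2 * size_tb        # striped, no redundancy
    raid1 = size_tb            # mirrored pair, one drive's worth usable
    raid10 = 4 * size_tb / 2   # striped mirrors, half the raw capacity

    print(f"RAID 0  (2 drives): {raid0} TB usable, no failure tolerance")
    print(f"RAID 1  (2 drives): {raid1} TB usable, survives losing one drive")
    print(f"RAID 10 (4 drives): {raid10:.0f} TB usable, survives one drive per mirror")
    ```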

    Things to take into account: if you go with a RAID controller card, make sure the PCIe lanes it uses can carry the full speed of your RAID configuration, or you might be bottlenecked there. Choosing an Intel or AMD CPU doesn’t make much difference. If you are not good with Linux distros and don’t want a learning curve, stick with something like Ubuntu 22.04 LTS Server. You most likely won’t need any graphics card, but it depends on what you want to do.

    You can run a Minecraft server on an old laptop, so these specs might be overkill; I just put down what I would get, and it will do almost anything you want. An 8-core CPU and 16 GB RAM with one NVMe drive will also handle all your described needs just fine.