Hi all. I was curious about some of the pros and cons of using Proxmox in a home lab setup. It seems like in most home lab setups it’s overkill, but I feel like there may be something I’m missing. Let’s say I run my home lab on two or three different SBCs. The main server is an x86 i5 machine with 16 GB of memory and the others are ARM devices with 8 GB, with ample storage on all of them. Wouldn’t Proxmox be overkill here and eat up more system resources than just running base Ubuntu, Debian or another server distro on them all and running the services I need from binaries or Docker? It seems like the extra memory needed to run the Proxmox software plus the containers would just kill the available memory or CPU headroom. Am I wrong in thinking that Proxmox is better suited to a machine with 32 GB or more of memory and a reasonably powerful baseline CPU?

  • Pika@sh.itjust.works · 6 months ago

    I’m currently running Proxmox on a 32 GB server with a Ryzen 5600G, and it’s going fine. The containers don’t actually use all that much RAM, and I’m actually seeing better benchmark results than I did when the box just ran as a bare Ubuntu server. My biggest issue has been IO strain more than anything, because everything is a lot more IO heavy now that it’s all containerized. I think I could easily run it with less RAM; I would just have to turn off some of the more RAM-intensive items.
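    If RAM does get tight, “turning things off” is mostly just this kind of thing from the Proxmox host shell (a rough sketch; the container ID 105 is a placeholder, check yours with `pct list`):

    ```sh
    # Stop a RAM-hungry container and keep it from coming back at boot
    # (105 is a hypothetical container ID)
    pct stop 105
    pct set 105 --onboot 0
    ```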

    As for whether I regret changing: no way Jose. I absolutely love having everything containerized, because I can set things up how I want, when I want, and if I screw up a configuration or decide I no longer need a service, I can just nuke the container without having to remember what I installed for that program, how to remove it, or whether other programs need its dependencies. Plus, while I haven’t tinkered as much in this area, you can hard-set what resources you want to allot to each instance. So if you have something like a Pi-hole that you know only ever needs a certain amount of resources to work properly, you can restrict it; that way, if something does go wrong with it, it can’t eat all of your system resources.
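    As a rough sketch of what that looks like on the host (108 and the limits are placeholder values for a hypothetical Pi-hole container):

    ```sh
    # Cap a container's memory, swap and CPU so a misbehaving service
    # can't starve the rest of the host (108 is a hypothetical CT ID)
    pct set 108 --memory 512 --swap 256 --cores 1
    ```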

    The biggest con is probably having to figure out the networking side, because every container is going to have a different IP address. I found that a web dashboard is my friend: Heimdall tells me where all my services are, and I just click an icon to get to the right IP address. It took a lot of work to figure out how it all operates and to get it working, but the benefits have been amazing.

    Also, make sure you have a spare disk to temporarily clone partitions to, because it’s extremely difficult to reuse the existing disks in the machine. I’ve been slowly going one disk at a time: copying the data over to an external drive, nuking the disk and reinitializing it as part of the Proxmox LVM, and then copying the data back onto the appropriate image file.
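    On the networking point, one thing that helps is pinning each container to a static address so the dashboard links never go stale. A sketch, with a made-up container ID, bridge addressing and gateway for my LAN:

    ```sh
    # Give a container a fixed address on the vmbr0 bridge
    # (101, the IP and the gateway are hypothetical; adjust for your LAN)
    pct set 101 --net0 name=eth0,bridge=vmbr0,ip=192.168.1.50/24,gw=192.168.1.1

    # Quick way to see what's running and check a container's address
    pct list
    pct config 101 | grep net0
    ```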
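    And the disk shuffle I described looks roughly like this (all device names, mount points and the storage ID are placeholders; double-check which disk is which before wiping anything):

    ```sh
    # 1. Copy the old disk's data off to the external drive
    rsync -aHAX /mnt/olddisk/ /mnt/external/olddisk-backup/

    # 2. Wipe the now-empty disk and hand it to LVM as a thin pool
    wipefs -a /dev/sdb
    pvcreate /dev/sdb
    vgcreate vgdata /dev/sdb
    lvcreate -l 95%FREE -T vgdata/data

    # 3. Register it as LVM-thin storage in Proxmox, then copy the data
    #    back into the container/VM disks that live on it
    pvesm add lvmthin local-data --vgname vgdata --thinpool data
    ```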