Many of the posts I read here are about Docker. Is anybody using Kubernetes to manage their self-hosted stuff? For those who've tried it and gone back to Docker, why?
I'm on my third rebuild of a K8s cluster, after learning what I'd done wrong and wanting to start fresh. When I was enhancing my Docker setup and deciding between K8s and Docker Swarm, I chose K8s for the learning opportunities and for how it could help me at work.
What’s your story?
I run a 2-node k3s cluster. There are a few small advantages over Docker Swarm, built-in network policies to lock down my VPN/torrent pod being the main one.
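For anyone curious what that lockdown looks like, here's a minimal sketch; the namespace, labels, and VPN endpoint address are all made up for illustration:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: torrent-lockdown
  namespace: media                  # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: torrent                  # hypothetical label on the VPN/torrent pod
  policyTypes:
    - Egress
  egress:
    # allow DNS lookups
    - ports:
        - protocol: UDP
          port: 53
    # allow traffic to the VPN endpoint only
    - to:
        - ipBlock:
            cidr: 203.0.113.10/32   # placeholder VPN server address
      ports:
        - protocol: UDP
          port: 51820               # WireGuard's default port
```

Everything else egressing from the pod gets dropped, so if the tunnel dies, nothing leaks. k3s ships with an embedded network policy controller, so policies like this are enforced out of the box.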
Other than that, writing Kubernetes YAML files is a lot more verbose than docker-compose. Helm does make it bearable, though.
Due to real life, my migration to the cluster is really slow, but the goal is to move all my services over.
It's not "better" than compose, but I like it, and it's nice to have worked with it.
I run k3s and all my stuff runs in it; no need to deal with Docker anymore.
I'm not very familiar with Kubernetes or k3s, but I thought it was a way to manage Docker containers. Is that not the case? I'm considering deploying a k3s cluster in my Proxmox environment to test it out.
Kubernetes is abbreviated K8s (because there are 8 letters between the "K" and the "s"). K3s is a "lite" version. Generally speaking, Kubernetes manages your containers. You basically tell K8s what the state should be and it does what it needs to do to get the environment as you've declared: it'll check and start or restart services, and start containers on a node that can run them (like ensuring enough RAM is available). There's a lot more, but that's the general idea.
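To make the "declare the state" idea concrete, here's a minimal sketch; the app name, image, and numbers are purely illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami                # example app, just for illustration
spec:
  replicas: 2                 # the declared state: keep 2 copies running at all times
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami:v1.10
          resources:
            requests:
              memory: 64Mi    # scheduler only places the pod on a node with this much RAM free
```

You never say "start this container on that machine"; you apply the manifest and Kubernetes keeps reconciling reality toward it. If a pod crashes or a node dies, replacements get scheduled automatically.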
How did you write your templates? Did you use Kompose to translate from Docker compose files, or did you write them from scratch?
Could you list some of the "stuff" you run on your k3s? I'm curious.
Oh, it's not that much: I run AdGuard DNS with ad blocking, SearXNG as my search engine, and Vaultwarden as my password manager. All combined with Argo CD as the GitOps engine, nginx ingress with cert-manager for Let's Encrypt certificates, Longhorn as the storage layer, and MetalLB as the load balancer solution. I'm planning to completely replace my current setup (an old Sandy Bridge-powered HP MicroServer) with a Turing Pi 2 cluster board with 4 RPi4 CMs as soon as they get cheaper.
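In case it helps anyone picture the GitOps part: with Argo CD, each app is described by an Application object that points at a Git repo, roughly like this (the repo URL and paths below are made up):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: vaultwarden
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/homelab/gitops.git   # hypothetical repo
    targetRevision: main
    path: apps/vaultwarden                                # folder holding the manifests
  destination:
    server: https://kubernetes.default.svc                # the local cluster
    namespace: vaultwarden
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual changes made on the cluster
```

Argo CD then keeps the cluster in sync with whatever is on main.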
Wow, you're self-hosting a password manager! Aren't you scared of something going wrong?
I'm also running AdGuard as my DNS-level ad blocker on my Pi 3. I'm way more content with it than I was with Pi-hole.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
| Fewer Letters | More Letters |
|---|---|
| DNS | Domain Name Service/System |
| Git | Popular version control system, primarily for code |
| HA | Home Assistant automation software ~ High Availability |
| HTTP | Hypertext Transfer Protocol, the Web |
| LXC | Linux Containers |
| NAS | Network-Attached Storage |
| SSD | Solid State Drive mass storage |
| SSH | Secure Shell for remote terminal access |
| VPN | Virtual Private Network |
| VPS | Virtual Private Server (opposed to shared hosting) |
| k8s | Kubernetes container management package |
| nginx | Popular HTTP server |
HA is high availability. Home Assistant is usually shortened to HASS.
It does list that as another possible meaning, if I'm reading the table correctly.
Good catch! I didn’t notice that.
Kubernetes is useful if you've gone full "cattle, not pets," and that's very uncommon in home setups. If you only own one or two small machines, you can't casually destroy and rebuild infra the "cattle" way, and the bloat that comes with Kubernetes doesn't help you either.
In homelabs and home servers, the pros of Kubernetes (high availability, auto-scaling, GitOps integrations, etc.) aren't very useful. Why would you need autoscaling and HA for an SFTP server used only by you? Instead you write a docker-compose.yml and call it a day.
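For that single-user SFTP example, the compose route really is a handful of lines; the image and credentials here are just placeholders:

```yaml
# docker-compose.yml
services:
  sftp:
    image: atmoz/sftp:alpine        # one commonly used SFTP image; swap in your own
    ports:
      - "2222:22"
    volumes:
      - ./upload:/home/me/upload
    command: me:changeme:1000       # user:password:uid, per the atmoz/sftp convention
    restart: unless-stopped
```

One `docker compose up -d` and you're done.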
This mostly, I haven’t seen a compelling reason to leave my docker setup.
I think the biggest reasons for me have been growth and professional development. I started my home cluster 8 years ago as a single node, basically just running the hack/ scripts on my Linux desktop. I've been able to grow that same cluster to 6 hosts as I've replaced desktops and got a bit into the used enterprise server scene. I've replaced multiple routers and moved behind Cloudflare, added a private CA a few times, added solid persistence with Rook+Ceph, built my ideal telemetry stack, added Velero backups into Backblaze B2, and probably a lot more I'm not thinking of.
That whole time, I've had to do almost zero maintenance or upgrades on the side projects I've built over the years, or on the self-hosted services I've run, if you ignore the day or so a year I've spent cursing my propensity to upgrade a tad too early and hit snags. Even then, I've just about always been able to resolve them pretty quickly, and I've learned even more from those times.
And on top of that, I get to take a lot of that expertise to work where it happens to pay quite well. And I’ve spent some time working towards building the knowledge into a side gig. Maybe someday that’ll pay the bills too.
One line from your comment struck a chord. The part about maintenance and upgrades. I feel like I get stuff set up and working and go about my life and then a failure happens at the most inopportune moment. Mostly, the failures are when I have a few hours free and decide to upgrade the OS and everything breaks and all the dependencies fall apart and some feature is no longer supported. That’s where I started looking to K8s to just roll back until I have time to manage it.
While you're probably right overall, there are many good reasons to use k8s. The API provides all sorts of benefits: kubectl, k9s, and other operational UIs; good deployment models and tools like Argo; loads of Helm charts that are (theoretically) ready to use.
No, those things aren’t free. There’s a lot of overhead to running k8s.
Love is a strong word, but Kubernetes is definitely interesting. I'm finishing up a migration of my homelab from a Docker host running in a VM managed with Portainer to one smaller VM and three refurbished Lenovo mini PCs running Rancher. It hasn't been an easy road, but I chose to go with Rancher and k3s since it seemed to handle my use case better than Portainer and Docker Swarm could. I can't pass up those cheap mini PCs.
Does Rancher connect the PCs together? I have like 3 mini PCs sitting around, and I've always wanted to kinda combine them somehow.
Like being able to combine CPU power or something. Idk if this is possible without getting a mobo with multiple CPU sockets, but if it is, I'd love to learn!
Yeah, Kubernetes is designed to run in a cluster, so you can pool processing power and memory from multiple devices. I banged my head against the wall for hours trying to figure out how to set up a cluster by hand, but then discovered that if you install Rancher in a regular Docker container, it can handle all that for you.
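If it helps, the "Rancher in a regular Docker container" step can itself be a tiny compose file. A sketch based on Rancher's single-node Docker install (check their docs for the current flags):

```yaml
# docker-compose.yml
services:
  rancher:
    image: rancher/rancher:latest
    privileged: true          # recent Rancher versions require this when run in Docker
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - rancher-data:/var/lib/rancher   # persist Rancher's state across restarts
volumes:
  rancher-data:
```

From the Rancher UI you can then create a cluster, and it generates the join command to run on each mini PC.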
No shit. So you're saying I can hook up like three mini PCs and make a mega home server!? I gotta look into this. Did you follow a guide you'd recommend, or is it as easy as a Google search?
My recommendation is to look into k3sup and Rancher. I had a lot of trouble trying to install Rancher in a Docker container and migrate to a cluster afterwards; k3sup makes it really easy to set up a k3s cluster without having to configure everything manually.
You can accomplish the same task with Docker Swarm, but I figured it would be better to learn something that wasn't abandonware.
I haven't dug into the storage side yet since I have a separate NAS, but it would probably be beneficial to set up something like Ceph, GlusterFS, or Longhorn if you don't have one.
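With something like Longhorn installed, workloads just claim storage through a PVC and the replication happens underneath. A minimal sketch (name and size are arbitrary):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-data              # arbitrary example name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn    # the StorageClass Longhorn installs
  resources:
    requests:
      storage: 10Gi
```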
Oh, I just realized this is for Kubernetes. Unraid is all Docker. Can a Docker swarm also pool resources?
Yep, similar concept. Not sure how well Unraid will handle the swarm behavior, but I imagine there's someone out there who has tried it before.
I went with Swarm instead. I don't need a department of k8s consultants.
I do AKS. I can't say love is the right word for it. Lol
AKS is a shame. Most of Azure, actually. I do my best to find ways around the insanity, but it always seems to leak back in with something insane they chose to do for whatever Microsoft reason they have.
No "love" from my side: I have thousands of users, not thousands of servers, so that's not a solution for any of my problems :)
I like the concept, but hate the configuration schema and tooling, which is all needlessly obtuse (e.g. Helm).
Helm is one of the reasons I became interested in Kubernetes. I really like the idea of a package where all I have to do is provide my preferences in a values file. Before Swarm was mature, I was managing my containers with complicated shell scripts to bring stuff up in the right order, and it became fragile and unmaintainable.
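That values-file workflow looks roughly like this. The keys below are typical, but every chart defines its own, so treat these as illustrative rather than from any real chart:

```yaml
# values.yaml -- just my preferences; the chart supplies sane defaults for everything else
replicaCount: 1
image:
  tag: "1.2.3"
ingress:
  enabled: true
  hosts:
    - host: app.example.home    # placeholder hostname
persistence:
  enabled: true
  size: 5Gi
```

Then a single `helm install myapp somerepo/somechart -f values.yaml` brings the whole thing up in the right order.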
Seems a bit overkill for a personal-use self-hosting setup.
Personally, I don't need anything that requires multiple replicas and load balancers.
Do people who have homelabs actually need them? Or is it just for learning?
I find mine useful both as a learning process and as a thing I need. I don't like using cloud services, so where possible I set things up to replace having to rely on them, such as Nextcloud for storage, and Plex plus some *arr servers for media. And I think once you weigh the hardware and power costs against what I'd pay for all the subs (particularly cloud storage), it comes out cheaper, at least with the hardware I'm using.
Yes, those are all great uses of it. But it could all still be achieved with Docker containers running on some machines at home, right?
Have you ever had a situation where features provided by kubernetes (like replicas, load balancers, etc) came in handy?
I’m not criticizing, I’m genuinely curious if there’s a use-case for kubernetes for personal self-hosting (besides learning).
I was a big proponent of k3s in the homelab, but I’m starting to think otherwise these days. I still expel choice words towards Docker’s networking, but it starts becoming more of a philosophical issue with what the company is doing and whoever decided this kind of networking is nice.
Is the networking on Podman any better? I understand using k8s at home to learn, but what if you don’t care about learning? I have never seen a point to k8s in homelabs other than in home-datacentres, and I’m starting to veer away from k3s too, since I don’t need extreme HA over 3 machines for my services (I would have used Proxmox if I wanted that).
Yeah, could someone give me a primer on how Podman is better than Docker? I’m adamant that I don’t want to use anything with the name “Docker” in my lab.
For me, I find that I learn more effectively when I have a goal. Sure, it's great to follow somebody's "Hello World" website tutorial, but the real learning comes when I start to extend it, to include CI/CD for example.
As far as a use case, I’d say that learning IS the use case.
A lot of people thought this was the case for VMs and docker as well, and now it seems to be the norm.
Yes, but Docker does provide features that are useful at the level of a hobbyist self-hosting a few services for personal use (e.g. reproducibility). I like using Docker and Ansible to set up my systems, as I can painlessly reproduce everything or migrate to a different VPS in a few minutes.
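A sketch of that Docker + Ansible combo, with made-up host names and paths (assumes the community.docker collection is installed, and that a real playbook would also install Docker itself first):

```yaml
# playbook.yml
- hosts: vps                      # hypothetical inventory group
  become: true
  tasks:
    - name: Copy the compose file over
      ansible.builtin.copy:
        src: files/docker-compose.yml
        dest: /opt/stack/docker-compose.yml

    - name: Bring the stack up
      community.docker.docker_compose_v2:
        project_src: /opt/stack
```

Point it at a fresh VPS and the same services come back up, which is most of the reproducibility I need.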
But Kubernetes seems overkill. None of my services have enough traffic to justify replicas; I'm the only user.
Besides learning (which is a valid reason), I don’t see why one would bother setting it up at home. Unless there’s a very specific use-case I’m missing.
I love Kubernetes. At the start of the year I installed k3s on my VPS and moved over all my services. It was a great learning opportunity that also helped immensely with my job.
It works just as well as my old docker compose setup, and I love how everything is contained in one place in the manifests. I don’t need to log in to the server and issue docker commands anymore (or write scripts / CI stages that do so for me).
Are most of your services just a single pod? Or do you actually have them scaled? How do you then handle non-cloud-native software?
The Lemmy instance I’m speaking from right now is running in my k8s cluster.
I was looking into converting my Docker services into a cluster to get high availability, and to learn it for work, but while investigating it I read that Kubernetes is actually meant for scalability, and for just a single service per cluster.
I also read that Docker Swarm is what's actually recommended for my homelab use case. So I'm now converting everything to Docker stacks. What do you think?
I’m not sure what you mean by that.
It provides high availability if you have multiple nodes and pods.
Also what do you mean by single service per cluster? Because that’s not the idea at all.
Of course high availability always requires multiple nodes.
It's just that while choosing how to set up my cluster I looked at several options (Proxmox, Swarm, Kubernetes…) and noticed that Kubernetes is generally meant for bigger deployments.
I only need a single replica for each of my containers and they can all run on a single node, so Kubernetes is overkill just to get high availability for my use case.
Kubernetes is awesome for self-hosting, but tbh its superpower isn't multi-node/scalability/clustering shenanigans. It's that because every bit of configuration is just an object in the API, you can really easily version control everything: charts and config go in Git, tools like Helm make applying changes super easy, Renovate does automatic updates, your CI tool of choice deploys on commit, you leverage your hobby into a DevOps role, profit.
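As a sketch of the deploy-on-commit part, assuming GitHub Actions, a kubeconfig stored as a repo secret, and a manifests/ folder containing a kustomization (all of which are illustrative choices, not the only way):

```yaml
# .github/workflows/deploy.yml
name: deploy
on:
  push:
    branches: [main]
jobs:
  apply:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Apply manifests to the cluster
        env:
          KUBECONFIG_DATA: ${{ secrets.KUBECONFIG }}   # hypothetical secret holding the kubeconfig
        run: |
          echo "$KUBECONFIG_DATA" > kubeconfig
          kubectl --kubeconfig kubeconfig apply -k manifests/
```

Every push to main then rolls the cluster forward to match the repo.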
Here’s a slightly different story: I run OpenBSD on 2 bare-metal machines in 2 different physical locations. I used k8s at work for a bit until I steered my career more towards programming. Having k8s knowledge handy doesn’t really help me so much now.
On OpenBSD there is no Kubernetes. Because I’ve got just two hosts, I’ve managed them with plain SSH and the default init system for 5+ years without any problems.