How do you monitor your server containers, disks, load…?
Do you use an easy-to-use web interface? Do you do everything via SSH? Or maybe you’ve got a more complicated setup?
I want to change my setup and I’m looking for new ideas. I’ve been using Cockpit for some years, and some of its plugins are really outdated (ZFS, for example) while others are completely broken (docker-compose).
I just use homepage as my homepage :D
I can see simple CPU/RAM/storage stats and have widgets for almost all of my services. One of them is Portainer, so I can see if any service is stopped (most of them run in Docker), and a few services send notifications on errors or updates (rough config sketch after this comment).
I know it’s not really a monitoring tool, but it works well enough for me.
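For anyone curious, a rough sketch of what that homepage config can look like (the resources widget in widgets.yaml plus a Portainer service widget in services.yaml); the URLs, endpoint ID, and API key are placeholders:

```yaml
# widgets.yaml - info bar with basic CPU/RAM/disk stats
- resources:
    cpu: true
    memory: true
    disk: /

# services.yaml - service entry with the Portainer widget
- Infrastructure:
    - Portainer:
        href: https://portainer.example.lan     # placeholder
        widget:
          type: portainer
          url: https://portainer.example.lan    # placeholder
          env: 1                                # Portainer environment/endpoint ID
          key: your-portainer-api-key           # placeholder
```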
My clients when they text me the server is down.
This has the same energy as my spouse yelling at me because jellyfin went down
Or my partners greeting me in the morning “Home assistant went down again, so the lights are all manual”
Thankfully that one is mostly solved.
So damn accurate ahhaha
Um, Proxmox?
I’m running everything in docker compose, so I’ve never found a use for it that justifies the wasted power.
docker-compose doesn’t scale well, and if you run it natively on the host it’s a little less secure.
Virtualization adds 1-2% of overhead at most and gives you way more control over how the hardware is used.
If your setup is small, docker-compose might be easier to manage, but as soon as you get more hardware it becomes the limiting factor. I still use docker-compose, but now I run it inside a VM.
I switched from docker compose to pure Ansible for deploying my containers. Makes managing config and starting containers across multiple hosts super easy. I considered virtualizing too but decided it didn’t offer me enough advantages. If I ever have an issue with the host OS I just reinstall using a preseed file and then rerun my playbooks and it’s ready to go.
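For reference, a minimal sketch of that approach, assuming the community.docker collection is installed; the inventory group, image, and paths here are placeholders:

```yaml
# deploy.yml - run with: ansible-playbook -i inventory deploy.yml
- hosts: docker_hosts              # placeholder inventory group
  become: true
  tasks:
    - name: Deploy Uptime Kuma
      community.docker.docker_container:
        name: uptime-kuma
        image: louislam/uptime-kuma:1
        state: started
        restart_policy: unless-stopped
        ports:
          - "3001:3001"
        volumes:
          - /opt/uptime-kuma:/app/data   # placeholder host path
```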
systemd can be used to run whatever notification scheme you would like to use whenever some service fails. Here’s an example of how to do it: https://www.baeldung.com/linux/systemd-service-fail-notification
Zabbix
Second Zabbix. Been using it for years and it just works.
Zabbix for agent / snmp based statistics.
Uptime Kuma for up/down states with a webhook notification into Discord so I get instant alerts on my phone when one goes down.
I’m a huge fan of Netdata, very configurable and monitors just about anything you could want. Great interface and alerts too - https://www.netdata.cloud/
I was looking for something free that I could host on my machine but thanks, I didn’t know about it
As others stated, you can run and access the interface locally (or setup your own reverse proxy) for free. Their Cloud dashboard is also free for up to 5 nodes. They recently added a flat-rate “Homelab” plan as well, if you want to remove the limit. It’s all quite usable for $0 otherwise though!
Netdata is free and can be run standalone. Just install it and do not configure the cloud integration. You can see your dashboard on localhost:19999
Oh that’s neat, I’ll take a look! Can you run it in Docker?
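You can, there’s an official image, and it works standalone without the cloud sign-up. A rough sketch (the official docs recommend a few more mounts and capabilities, so treat this as a starting point):

```sh
docker run -d --name=netdata \
  -p 19999:19999 \
  -v /proc:/host/proc:ro \
  -v /sys:/host/sys:ro \
  -v /etc/os-release:/host/etc/os-release:ro \
  --cap-add SYS_PTRACE \
  --restart unless-stopped \
  netdata/netdata
# dashboard is then at http://localhost:19999
```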
I love how easy to use Netdata is, but when running it on my home servers it destroys their performance lol. Every once in a while I check in to see if it runs better.
That’s strange, I’ve run it fine on some very underpowered hardware. Are you adding a specific monitoring integration with it, or just out of the box settings?
Just out of the box. I am usually running it as a container on UnRAID on an x86 machine. It seems primarily to just be a big memory hog when I’ve tried to use it.
Weird! For reference one VM I run on only has 1 GB of memory, and Netdata uses 100-200 MB. Could be something going on with UnRAID though. Definitely some sort of bug I’d think, since normally resource usage should be very low across the board.
Same, I’ve been running Netdata for years. They’re monetizing now where it used to just be free. Good for them, it’s a great product. And it’s FOSS.
Node exporter on hosts, OpenTelemetry collector to scrape metrics and collect logs, shipping them to Prometheus and Loki, visualising with Grafana.
My day job is at an observability platform where we heavily encourage the use of (and also contribute to) the OpenTelemetry collector project, hence my use of it.
Try VictoriaMetrics. Basically the same feature set as Prometheus, but so much more resource friendly for homelab scale. I store some metrics for 12 months now, because it’s easy.
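For anyone curious what that looks like in practice, a rough sketch of a single-node setup, assuming the victoriametrics/victoria-metrics image (the retention flag takes months by default; the host path is a placeholder):

```sh
docker run -d --name=victoriametrics \
  -p 8428:8428 \
  -v /opt/victoria-metrics:/victoria-metrics-data \
  victoriametrics/victoria-metrics \
  -retentionPeriod=12
# accepts Prometheus remote_write at http://<host>:8428/api/v1/write
```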
Similar setup here with additional exporters like cadvisor for container metrics and other components.
OpenTelemetry is awesome, but it’s still a very fast-moving project, so expect more frequent updates and changes compared to older, more established projects.
Do you have a name for the OpenTelemetry collector you use? I’m interested.
Use the Contrib version of the collector; it has many more receivers, processors, and exporters.
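To give a feel for that setup, here’s a rough sketch of a contrib-collector config for the pipeline described upthread (node exporter metrics into Prometheus, logs into Loki); the endpoints and log path are placeholders, and exporter availability is worth double-checking against your contrib version:

```yaml
receivers:
  prometheus:                       # scrape node_exporter the same way Prometheus would
    config:
      scrape_configs:
        - job_name: node
          static_configs:
            - targets: ["localhost:9100"]
  filelog:                          # tail host logs (contrib-only receiver)
    include: [/var/log/syslog]

exporters:
  prometheusremotewrite:            # Prometheus must have remote-write receiving enabled
    endpoint: http://prometheus.example.lan:9090/api/v1/write
  loki:
    endpoint: http://loki.example.lan:3100/loki/api/v1/push

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheusremotewrite]
    logs:
      receivers: [filelog]
      exporters: [loki]
```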
My own server? YOLO
At work? Grafana, KOBS, Victoria Metrics, Jaeger, OpsGenie, …
This is the first time I’ve heard of Victoria Metrics. It looks like it has a similar use case as Prometheus, is that correct? If so, what made you or your team choose one over the other?
My own server? YOLO
I can’t figure out whether there’s a monitoring tool called YOLO or you don’t monitor anything.
Now I am intrigued to develop one that is called YOLO.
But just in case: no, I don’t monitor my server. If I notice something not working, I ssh into the machine and check what’s up. I don’t want to deal with another zoo of services for the monitoring part.
You are me
I like monit. It’s simple to set up and pretty flexible.
I used it as well until I found out I could just do it with systemd: https://www.baeldung.com/linux/systemd-service-fail-notification
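Roughly, the pattern is an OnFailure= hook that points at a template unit running your notification command; a sketch, with the unit name and script path made up for illustration:

```ini
# /etc/systemd/system/notify-failure@.service  (hypothetical name)
[Unit]
Description=Send a notification when %i fails

[Service]
Type=oneshot
# placeholder script - swap in mail, ntfy, curl to a webhook, etc.
ExecStart=/usr/local/bin/notify-failure.sh "Unit %i failed on %H"
```

Then add `OnFailure=notify-failure@%n.service` to the `[Unit]` section of whatever service you want to watch.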
Monitorix or Netdata.
“Huh weird, I tried to use <insert service here> and it’s not working. Welp, guess I better fix it…”
I’ve been using Uptime Kuma recently and it’s great, but it works better outside of Docker.
Inside Docker I’d get a lot of false “down” positives, which I assume came from Docker throttling the checks.
Plus it works with email, Telegram, and Matrix chat alerts. I monitor all my clients’ sites with it, and it’s bulletproof behind Caddy.
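For reference, putting Kuma behind Caddy really is just a couple of lines; a sketch with a placeholder domain and Kuma’s default port:

```
status.example.com {
    reverse_proxy localhost:3001
}
```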
For light-touch monitoring this is my approach too. I have one instance on my network and another on fly.io for the VPSs (my most common outage is my home internet). To make it a tiny bit stronger, I wrote a Go endpoint that exposes a server’s disk and memory usage along with mem_okay and disk_okay keywords, and I have Kuma checking those (rough sketch after this comment).
I even have the two Kuma instances checking each other: each one has a status page, and the other monitors it for a “degraded” state. I have ntfy set up on both, so I get the Kuma change notifications on my iPhone. I love ntfy so much I donate to it.
For my VPSs this is probably not enough, so I’m considering the more complicated solutions (I’ve started wanting to know about things like an influx of fail2ban bans, etc.).
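In case it’s useful to anyone, here’s a rough sketch of that kind of endpoint (Linux-only; the port, paths, and 90% thresholds are made up). Kuma’s keyword monitor just needs to find mem_okay / disk_okay in the response body:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// diskUsedPercent returns the used percentage of the filesystem at path.
func diskUsedPercent(path string) (float64, error) {
	var st syscall.Statfs_t
	if err := syscall.Statfs(path, &st); err != nil {
		return 0, err
	}
	total := float64(st.Blocks) * float64(st.Bsize)
	avail := float64(st.Bavail) * float64(st.Bsize)
	return (total - avail) / total * 100, nil
}

// memUsedPercent derives used memory from /proc/meminfo (Linux only).
func memUsedPercent() (float64, error) {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		return 0, err
	}
	defer f.Close()

	vals := map[string]float64{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) < 2 {
			continue
		}
		key := strings.TrimSuffix(fields[0], ":")
		if key == "MemTotal" || key == "MemAvailable" {
			v, _ := strconv.ParseFloat(fields[1], 64)
			vals[key] = v
		}
	}
	if vals["MemTotal"] == 0 {
		return 0, fmt.Errorf("could not parse /proc/meminfo")
	}
	return (vals["MemTotal"] - vals["MemAvailable"]) / vals["MemTotal"] * 100, nil
}

// health prints one line per check; the "_okay" keywords only appear
// while usage is under the (made up) 90% threshold.
func health(w http.ResponseWriter, _ *http.Request) {
	if disk, err := diskUsedPercent("/"); err == nil && disk < 90 {
		fmt.Fprintf(w, "disk_okay %.1f%%\n", disk)
	} else {
		fmt.Fprintf(w, "disk_warn\n")
	}
	if mem, err := memUsedPercent(); err == nil && mem < 90 {
		fmt.Fprintf(w, "mem_okay %.1f%%\n", mem)
	} else {
		fmt.Fprintf(w, "mem_warn\n")
	}
}

func main() {
	http.HandleFunc("/health", health)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```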
I just do web hosting for clients’ sites and use Kuma to monitor uptime and SSL certificates.
I’ve got multiple Kumas running as well.