+1 for the main risk to my service reliability being me getting distracted by some other shiny thing and getting behind on maintenance.
I started as more “homelab” than “selfhosted” at first - so I was just stuffing around playing with things. After a while that seemed sort of pointless and I wanted to run real workloads; then I discovered that was super useful, and I loved extracting myself from commercial cloud services (Dropbox etc). The point of this story is that I built most of the infrastructure before I was running services that I (or family) depended on - which is where it can become a source of stress rather than fun, and is where I’m guessing you find yourself.
There’s no real way around this (the pressure you’re feeling): if you’re running real services, it’s going to take some sysadmin work to get to the point where you can relax, knowing you can quickly deal with any problem. There’s lots of good advice elsewhere in this thread about bits and pieces to do this - the exact methods are going to vary according to your needs. Here’s mine (which is not perfect!).
I still have lots of single points of failure - Tailscale, my internet provider, my domain provider etc - but I think I’ve addressed the most common, which would be hardware failures at home. My monitoring is also probably sub-par: I’m not really looking at logs unless I’m investigating a problem. Maybe there’s a Netdata or something in my future.
You’ve mentioned that syncing to a remote server for backups is a step you don’t want to take. If what you mean is that managing your own remote server is a step you don’t want to take, then your options are a paid backup service like Backblaze, or physically shuffling external USB drives (or extra NASs) back and forth to somewhere - depending on what downtime you can tolerate.
+1 for Syncthing. I run it on a server at home, then on my MacBook over Tailscale. For web access I run FileBrowser (also over Tailscale) against the same directory.
I run two local physical servers, one production and one dev (and a third prod2 kept in case of a prod1 failure), and two remote production/backup servers all running Proxmox, and two VPSs. Most apps are dockerised inside LXC containers (on Proxmox) or just docker on Ubuntu (VPSs). Each of the three locations runs a Synology NAS in addition to the server.
Backups run automatically, and I manually run apt updates on everything each weekend with a single ansible playbook. Every host runs a little golang program that exposes the memory and disk use percent as a JSON endpoint, and I use two instances of Uptime Kuma (one local, and one on fly.io) to monitor all of those with keywords.
So -
I routinely run my homelab services as a single Docker container inside an LXC - they’re quicker, and it makes backups and moving them around trivial. However, while you’re learning, a VM (with something conventional like Debian or Ubuntu) is probably the better choice - it’s a more common setup, so you’ll get more helpful advice when you ask a question like this.
I’m also on Silverbullet, and from OP’s description it sounds like it could be a good fit. I don’t use any of the fancy template stuff - just a bunch of md files in a directory with links between them.
For light touch monitoring this is my approach too. I have one instance in my network, and another on fly.io for the VPSs (my most common outage is my home internet). To make it a tiny bit stronger, I wrote a Go endpoint that exposes the disk and memory usage of a server including with mem_okay and disk_okay keywords, and I have Kuma checking those.
I even have the two Kuma instances checking each other by making a status page and adding checks for each other’s ‘degraded’ state. I have ntfy set up on both so I get the Kuma change notifications on my iPhone. I love ntfy so much I donate to it.
For my VPSs, this is probably not enough, so I’m considering the more complicated solutions (I’ve started wanting to know about things like an influx of fail2ban bans, etc.).
If this is a question about how to access your server at home from devices anywhere, securely, with a simple setup, then the answer is turn off all that port forwarding, and use Tailscale.
With a somewhat similar usecase, I ended up using Kavita.
Yo dawg, I put most of my services in a Docker container inside their own LXC container. It used to bug me that this seems like a less than optimal use of resources, but I love the management - all the VM and containers on one pane of glass, super simple snapshots, dead easy to move a service between machines, and simple to instrument the LXC for monitoring.
I see other people using (and I’m interested in) an even more generic system (maybe Cockpit or something), but I’ve been really happy with this. If OP’s dream is managing all the containers and VMs together, I’d back having a look at Proxmox.
This is where I landed on this decision. I run a Synology which just does NAS on spinning rust, and I don’t mess with it. Since you know rsync, this will all be a painless setup apart from the upfront cost. I’d trust any 2-bay Synology less than 10 years old (I think the last two digits in the model number are the year). If your budget is tight, grab a couple of 2nd-hand disks from different batches (or three if your budget stretches to it).
I also endorse u/originalucifer’s comment about a real machine. Thin clients like the HP minis or Lenovos are a great step up.
This. Hosting at home might be cheaper if you are serving a lot of data, but in that case, the speed’s going to kill you.
I’m a keen self-hoster, but my public facing websites are on a $4 VPS (Binary Lane - which I recommend since you’re in Aus). In addition to less hassle, you get faster speeds and (probably) better uptime.
Thanks - I thought it would be something like this I just hadn’t made the effort. Calibre-web just runs as a server?
I’ve just been down this exact journey, and ended up settling on Kavita. It has all the browse, search and library stuff you’d expect. You can download or read things in the web interface. I’m only using it for epub and PDF books, but its focus is comics and manga so I expect it to shine there.
I don’t think it does mobi, but since I use Calibre on my laptop to neaten up covers and metadata before I drop books on to the server it’s a simple matter to convert the odd mobi I end up with. Installation (using docker inside an LXC) was simple.
It’s been a really straightforward, good experience. Highly recommend. I like it better than AudioBookshelf (which I’m already hosting for audiobooks) - I tried that for books too, but didn’t like it as much, for inexplicable reasons. I also considered Calibre-Web, but that seemed a bit messy, since I guess I’d use Calibre on my laptop to manage my books on a NAS share and then serve it headless from the server with Calibre-Web? I might have that completely wrong - I didn’t spend any time looking into it, because Kavita was the second thing I tried and it did exactly what I wanted.
There’s a project called Filebrowser that allows you to edit text files in a web interface. You can just run that on the 192.168.1.2 machine. It’s easy to set up simple auth, and you can restrict it to the /data/ directory.
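As a rough sketch, the invocation could look something like this (using File Browser’s `-r`/`-a`/`-p` flags; the address, port and directory here are just this thread’s example values):

```
filebrowser -r /data -a 192.168.1.2 -p 8080
```

Then create a non-admin user in the web UI for day-to-day editing, so the simple auth actually buys you something.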
Built my own quad-copter and flew it around. Had to flash plane ESCs with custom firmware, wire it all up manually to a controller and muck around with the values to tune it - then you could hand-fly it (very carefully). It was amazing! - an RC plane that could hover.
Nowadays, if I go somewhere and some normie’s “flying” a DJI, I’m annoyed with them. It’s really breathtaking how quickly these got so good.
+1 for Tailscale. It’s a vital piece of the system for me now.
Your head might be spinning from all the different advice you’re getting - don’t worry, there are a lot of options and lots of folk are jumping in with genuinely good (and well meaning) advice. I guess I’ll add my two cents, but try and explain the ‘why’ of my thinking.
I’m assuming from your questions you know your way around a computer, can figure things out, but haven’t done much self-hosting. If I’m wrong about that, go ahead and skip this suggestion.
Same, but with the jellyfin/jellyfin image. Been solid for me, less dramas than raw on the OS. Two cores and 8GB for the VM (in Proxmox), media on a NAS, metadata on local SSD.
Yeah na, put your home services in Tailscale, and for your VPS services set up the firewall for HTTP, HTTPS and SSH only, no root login, use keys, and run fail2ban to make hacking your SSH expensive. You’re a much smaller target than you think - really it’s just bots knocking on your door and they don’t have a profit motive for a DDOS.
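For the SSH hardening part, the relevant lines (usually in /etc/ssh/sshd_config) are just these - restart sshd afterwards, and make sure your key login works in a second session before you close the first:

```
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
```

fail2ban’s default sshd jail then watches the auth log and bans the bots that keep knocking anyway.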
From your description, I’d have the website on a VPS, and Immich at home behind Tailscale. Job’s a goodun.