Hi all!
So I want to get back into self-hosting, but every time I’ve stopped, it’s been because I lacked the documentation to fix things that break. So I pose a question: how do you all go about keeping your setup documented? What programs do you use?
I have been leaning towards open source software, so things like OneNote, or anything Microsoft, are out of the question.
Edit: I didn’t want to add another post and annoy people, but I had another inquiry:
What reverse proxy do you use? I plan to run a bunch of services from Docker, and would like to be able to map an IP:port to something like service.mylocaldomain.lan.
I already have Unbound set up on my PiHole, so I have the ability to set DNS records internally.
Bonus points if whatever reverse proxy setup you use can handle SSL cert automation.
Wiki.js for documentation, Nginx Proxy Manager for the reverse proxy.
- caddyserver for reverse proxy
- docker-compose for ~75% of documentation
- logseq for notes, though I don’t keep much.
Docker and docker-compose are nice because every service you want to run follows the same basic pattern. You don’t need much documentation beyond the project docs and the compose files themselves.
Edit: caddyserver can do automatic certs, even behind a firewall, if you set up the API call (DNS challenge) method. It varies by registrar.
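To make that pattern concrete, here’s a minimal sketch of a compose file with Caddy fronting one hypothetical app. The service names, image tags, ports, and paths are placeholders, not the commenter’s actual setup; the mounted Caddyfile would just map a hostname to the app container, and Caddy obtains and renews certificates on its own.

```yaml
# Hypothetical compose sketch: Caddy as the reverse proxy plus one app behind it.
services:
  caddy:
    image: caddy:2                  # official Caddy image
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # Caddyfile would contain e.g. "service.mylocaldomain.lan { reverse_proxy someapp:8080 }"
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data            # persists issued certificates across restarts

  someapp:
    image: someapp:1.2.3            # placeholder app, pinned to a specific tag
    restart: unless-stopped
    volumes:
      - ./someapp-data:/config

volumes:
  caddy_data:
```

Every additional service is another block of the same shape, which is why the compose files end up carrying most of the documentation.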
Dokuwiki (dokuwiki.org) is my usual go-to. It’s really simple and stores entries as plain text files, so you can get at them directly in a pinch. Here’s a life lesson: don’t host your documentation on the machine you’re going to be breaking! Learned that the hard way once or twice.
For reverse proxies, I’m a fan of HAProxy. It uses pretty straightforward config files and is incredibly robust.
I use BookStack, and with Node-RED I export the books to PDF as soon as pages get updated, so if everything goes belly up I still have all the documentation as PDFs (stored locally and automatically uploaded to a free Dropbox account, also done with Node-RED).
I may have to check out BookStack. I dig the looks of it.
I use markdown text files which are synced to my nextcloud instance.
This is somewhat tangential to your post, but I think using infrastructure as code and declarative technologies is great for reliability: you aren’t just running a bunch of commands until something works; you have the code, which tells you exactly how things are set up, and you can version control it to roll back to a working state. The code itself can be a form of documentation in that case.
I think I need to utilize this strategy because I get lazy and don’t update external documentation.
Some examples of technologies which follow that paradigm are docker compose, ansible, nixOS and terraform. But it all depends on your workflow.
I think I am going down the docker compose route. When I started using docker, I didn’t use compose, however, now I plan to. Though, Ansible has been on my list of things to learn, as well as nixOS.
Another suggestion for you, I highly recommend specifying a version for the docker image you are using for a container, in the compose file. For example, nextcloud:29.0.1. If you just use :latest, it will pull a new version whenever you redeploy which you may not have tested against your setup, and the version upgrade may even be irreversible, as in the case of nextcloud. This will give you a lot more control over your setup. Just don’t forget to update images at reasonable intervals.
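In compose terms that’s a one-line difference; a quick sketch using the Nextcloud example above:

```yaml
services:
  nextcloud:
    # image: nextcloud:latest    # floating tag: pulls whatever is current on each redeploy
    image: nextcloud:29.0.1      # pinned tag: only upgrades when you change this line
```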
That is good advice, and honestly it never really occurred to me to set specific versions for containers.
I run a k3s cluster for self-hosted apps and keep all the configuration and docs in a git repo. That way I have a history of changes and can roll back if needed. In that repo I have a docs folder with markdown documents about common operations and runbooks.
There are other ways to do this, but I like keeping docs next to the code and config so I can update them all at the same time. I’ve deployed several wikis in the past but always forgot to update them when I changed things.
I really should spend time familiarizing myself with maintaining a git repo. I’ll likely find one I can self-host.
If you want a git “server” that’s quick and low maintenance, then gitolite is most likely the best choice. https://gitolite.com/gitolite/index.html
It simply acts as a server that you can clone with any git client, and the coolest part is that you use git commits to create repositories and manage users as well. Very little or no maintenance at all. I’ve been using it personally for years, but I’ve also seen it used at some large companies, because it simply gets the job done and doesn’t bother anyone.
I will have to check out gitolite. Thank you!
Self-hosted https://forgejo.org has been good for me; it’s a FOSS fork of Gitea.
Thank you for the suggestion. The fact that it’s FOSS wins my vote. I have been trying to go all open source where possible.
I recently made the switch from k3s+flux to having everything in code with bundlewrap and anemos/makeimg.
I have an Ansible playbook that I use to set up everything, and all the troubleshooting steps I’ve ever had to take to fix something get written down in an Obsidian.md vault.
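For anyone who hasn’t used Ansible, such a playbook is roughly this shape; the host group, packages, and paths below are made-up placeholders rather than this commenter’s actual setup.

```yaml
# site.yml: hypothetical top-level playbook
- name: Configure homelab hosts
  hosts: homelab          # placeholder inventory group
  become: true
  tasks:
    - name: Install base packages
      ansible.builtin.apt:
        name:
          - git
          - podman
        state: present

    - name: Deploy a templated service config
      ansible.builtin.template:
        src: templates/someservice.conf.j2   # placeholder template
        dest: /etc/someservice/someservice.conf
        mode: "0644"
```

Running it is one command (`ansible-playbook -i inventory.yml site.yml`), which is what lets the playbook double as the documentation of how a box was set up.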
I have a couple of LibreOffice files where I document the non-technical stuff for my own quick reference, like network layout in Draw, or IP and port assignments in Calc. I use a git repo to store and organize podman scripts, systemd unit files, configs, etc. Probably not the most elegant solution, but it’s simple and FOSS.
Reverse proxy is Nginx Proxy Manager.
One day, I moved all services I really wanted from a couple of random VPS to a nice little proxmox machine at home (and then added some more services, of course). That was the day I swore to document stuff better, and I’m pretty satisfied with how well I was able to keep up with that.
In the proxmox web interface, you can leave notes per container. I note down which service the container is running, including a link to the service’s web interface if applicable, plus the source, and a note about whether it auto-updates (green check mark emoji) or requires manual updates (handyman emoji).
Further, I made a conscious effort to document everything in a gollum wiki running on that proxmox host (it exposes a wiki-like web interface and stores all entries as plaintext .md files in a local git repo - very “portable”). Most importantly, it also includes a page of easy-to-understand emergency measures in case I die or become unresponsive, which I regularly print out and put into a folder with other important documents. The page contains a QR code linking to itself on the wiki too, in case the printed version is outdated here or there.
The organization of the wiki itself (what goes into which folder) is a bit of a work in progress, but as it offers full text search, that’s not too much of a problem imo.
I use obsidian for my notes/wiki. I use the git plugin to back up/sync my notes. I self-host forgejo as my git server. Works great!
Caddy is my favorite reverse-proxy. The setup is just a config file.
StandardNotes for me
Traefik for reverse proxy. Tag your container with the route and let traefik take over.
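Roughly, the label approach looks like the snippet below, assuming Traefik’s Docker provider is enabled and Traefik can reach the Docker socket; the whoami service and hostname are placeholders.

```yaml
# Hypothetical compose snippet: the route is declared as labels on the container itself.
services:
  whoami:
    image: traefik/whoami             # tiny demo web service
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.mylocaldomain.lan`)"
      - "traefik.http.services.whoami.loadbalancer.server.port=80"
```

Traefik watches Docker events and picks the route up when the container starts, so the compose file itself ends up documenting the routing.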
I think Traefik is going to be what I investigate using. However, the last time I tried it, I was a little lost. I will have to comb over the documentation better this time.
Traefik is powerful and versatile but has a steep learning curve. It also uses code to control its configuration which is a bonus for reliability and documentation as discussed elsewhere ITT. Nginx proxy manager is much simpler and easier to use, may be a good one to get started with, but lacks the advantages of traefik described above. Nginx proxy manager does support SSL cert automation.
Jim’s garage has some videos on it.
OPNSense router handles auto SSL certificate renewals, Unbound (DNS), and HAProxy (for reverse proxy).
Gitea instance for all of my docker-compose configs and documentation.
Joplin server and Joplin clients for easy notes available on all my devices.
- ansible playbook for automated/self-documenting setup
- for one-off bugs or ongoing/long-term problems, open an issue on my gitea instance and track the investigations and solutions there.
I’m also using ansible everywhere in my home / private infra and lab. Occasionally I get slightly annoyed that I have to open an inventory file or a role var to find something. But in general I’m so grateful that there is one place to find this information, and that the same source is used to set up everything from scratch.
Is it extra work to write the roles and playbooks? Yes. Does it solve the documentation and automation problem completely? Absolutely. 10/10 would recommend. And for the record, most things I host run on containers, but the volumes and permission management alone make it worth your time.
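For context, that “one place” is usually just the inventory plus group/host vars; a made-up example of the kind of thing it records (every hostname, address, and var here is invented):

```yaml
# inventory.yml: hypothetical Ansible inventory
homelab:
  hosts:
    docker-host-01:
      ansible_host: 192.168.1.20
    pihole:
      ansible_host: 192.168.1.2
  vars:
    timezone: Etc/UTC
```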