Just an Aussie tech guy - home automation, ESP gadgets, networking. Also love my camping and 4WDing.
Be a good motherfucker. Peace.
I’ve written my wiki so that, if I end up shuffling off this mortal coil, my wife can give access to one of my brothers and they can help her by unpicking all the smart home stuff.
I’m using self hosted wiki.js and draw.io. Works a treat, and trivial to backup with everything in Postgres.
It doesn’t have to be hard - you just need to think methodically through each of your services and assess the cost of creating and storing the backups you want versus the cost (in time, effort, inconvenience, etc.) of rebuilding from scratch.
For me, that means my photo and video library (currently Immich) and my digital records (Paperless) are backed up using a 2N+C strategy: a copy on each of 2 NASes locally, and another copy stored in the cloud.
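In practice the 2N+C fan-out is just a couple of scheduled rclone syncs. A crontab sketch - the remote names (nas2:, s3:) and paths are placeholders, not my actual config:

```shell
# Nightly 2N+C fan-out: the primary NAS copy is the source of truth.
# "nas2:" and "s3:" are rclone remotes you'd set up beforehand via `rclone config`.
30 2 * * * rclone sync /storage/photos nas2:backups/photos --log-file=/var/log/rclone-nas2.log
30 3 * * * rclone sync /storage/photos s3:my-backup-bucket/photos --log-file=/var/log/rclone-s3.log
```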
Ditto for backups of my important homelab data. I have some important services (like Home Assistant, Node-RED, etc) that push their configs into a personal Gitlab instance each time there’s a change. So, I simply back that Gitlab instance up using the same strategy. It’s mainly raw text in files and a small database of git metadata, so it all compresses really nicely.
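The push-on-change bit doesn’t need anything fancy - a small shell function fired by the service (or cron) covers it. A sketch, where the directory, remote and branch names are all assumptions, not my actual setup:

```shell
# Commit the config directory only when something has actually changed.
# "/config", "origin" and "main" are placeholders for your own paths/remotes.
autocommit() {
    dir="$1"
    git -C "$dir" add -A
    # Nothing staged? Then there's nothing to back up this time around.
    git -C "$dir" diff --cached --quiet && return 0
    git -C "$dir" commit -q -m "auto-backup: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
    # git -C "$dir" push -q origin main   # uncomment once a remote exists
}
```

Home Assistant can fire something like this from its shell_command integration, and Node-RED can call it from an exec node.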
For other services/data that I’m less attached to, I only backup the metadata.
Say, for example, I’m hosting a media library that might replace my personal use of services that rhyme with “GetDicks” and “Slime Video”. I won’t necessarily back up the media files themselves - that would take way more space than I’m prepared to pay for. But I do back up that service’s databases, which tell me what media files I had, and even the exact names of the media files when I “found” them.
In a total loss of all local data, even though the inconvenience factor would be quite high, the cost of storing backups would far outweigh that. Using the metadata I do backup, I could theoretically just set about rebuilding the media library from there. If I were hosting something like that, that is…
The whole point of this particular comment thread here is that we’re already starting to see what’s happening: people are taking back control. You’re here on Lemmy, proving that exact point.
I never said we needed Cory to tell us what comes next. Just come up with another colourfully descriptive term like he did with enshittification.
You sound like that insufferable ponytail from Good Will Hunting.
Cheers. Fixed.
We need Cory to coin a term for what comes after enshittification. Perhaps we can call it the Great Wipening, where we all stop paying to be treated like serfs and start taking back control of our content and data.
lol - I’m the same, and frequently wonder if I’m allowing tech debt to creep in. My last update took me to 8.0.3, and that was only because I built a new node and couldn’t get an older version for the architecture I wanted to run it on.
I just have a one-liner in crontab that keeps the last 7 nightly database dumps. That destination is on one of my NASes, which rclones everything to my secondary NAS and an S3 bucket.
docker exec -t paperless-db-1 pg_dumpall -c -U paperless | gzip > /storage/proxmox-data/paperless/backups/paperless_$( date +\%Y\%m\%d )T$( date +\%H\%M\%S ).sql.gz; ls -tp /storage/proxmox-data/paperless/backups/*.sql.gz | grep -v '/$' | tail -n +8 | xargs -I {} rm -- {}
Yep - they introduced paid subscription tiers and put multi-user support into those: https://www.photoprism.app/editions#compare
You do need to be able to reach your public IP to be able to VPN back in. I have a static IP, so no real concerns there. But, even if I didn’t, I have a Python script that updates a Route53 DNS record for me in my own domain - a self-hosted dynamic DNS really.
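My actual script is Python with boto3, but the guts of it - grab the current public IP, then UPSERT an A record - fit in a few lines of shell too. A sketch, with the record name, zone ID and TTL all placeholders:

```shell
# Build a Route53 UPSERT change batch for an A record.
# Record name and IP are parameters; a TTL of 300 is an arbitrary choice.
build_change_batch() {
    name="$1"; ip="$2"
    printf '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"%s","Type":"A","TTL":300,"ResourceRecords":[{"Value":"%s"}]}}]}' "$name" "$ip"
}

# Wiring it up (needs the AWS CLI configured, and your own hosted zone ID):
#   ip=$(curl -s https://checkip.amazonaws.com)
#   build_change_batch home.example.com "$ip" > /tmp/batch.json
#   aws route53 change-resource-record-sets --hosted-zone-id "$ZONE_ID" \
#       --change-batch file:///tmp/batch.json
```

Run it from cron every few minutes and the record tracks your IP - self-hosted dynamic DNS.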
You certainly can run a Wireguard server in a Docker container - the good folks over at Linuxserver have just the repo for you.
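Their image boils down to a single run command. A sketch - the server URL, timezone, peer names and config path are all placeholders for your own values:

```shell
# Wireguard server via the linuxserver.io image; it generates peer configs
# (and QR codes for phones) on first start under /config.
# SERVERURL is the public DNS name clients dial; PEERS makes one config per name.
docker run -d --name=wireguard \
  --cap-add=NET_ADMIN \
  --sysctl net.ipv4.conf.all.src_valid_mark=1 \
  -e PUID=1000 -e PGID=1000 \
  -e TZ=Australia/Sydney \
  -e SERVERURL=vpn.example.com \
  -e PEERS=phone,laptop \
  -p 51820:51820/udp \
  -v /opt/wireguard/config:/config \
  lscr.io/linuxserver/wireguard:latest
```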
This may take us down a bit of a rabbit hole but, generally speaking, it comes down to how you route traffic.
My firewall has an always-on VPN connected to Mullvad. When certain servers (that I specify) connect to the outside, I use routing rules to ensure those connections go via the VPN tunnel. Those routes are only for connectivity to outside (non-LAN) addresses.
At the same time, I host a server inside that accepts incoming Wireguard client VPN connections. Once I’m connected (with my phone) to that server, my phone appears as an internal client. So the routing rules for Mullvad don’t apply - the servers are simply responding back to a LAN address.
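I do this with OPNsense firewall rules, but the same idea expressed as plain Linux policy routing looks roughly like this (the interface name, table number and addresses are invented for illustration):

```shell
# Policy routing sketch: send one server's *internet-bound* traffic out via the
# Mullvad tunnel, while LAN-to-LAN traffic (including replies to incoming
# Wireguard clients, who appear as LAN addresses) stays on the main table.
ip route add default dev wg-mullvad table 100       # VPN table: default route out the tunnel
ip rule add to 192.168.0.0/16 lookup main pref 90   # LAN destinations: normal routing, matched first
ip rule add from 192.168.1.50 lookup 100 pref 100   # that server's remaining traffic: via Mullvad
```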
I hope that explains it a bit better - I’m not aware of your level of networking knowledge, so I’m trying not to over-complicate just yet.
Yeah, this is why I jumped ship to Immich last year. I was donating to PP, with the understanding that donating users would get access to multi-user features when they happened.
Then they put them behind a paid recurring subscription. For self-hosted users. That move broke all the trust with me.
Mullvad is great for outbound VPN, but inbound is a PITA without port forwarding (as you’ve said). I just host a Wireguard container for inbound connectivity now, and it works flawlessly.
increasingly uncomfortable with paying forever
And paying more and more as time goes on. The thing that shits me the most is the increased prices but decreased range/quality of content. That’s clearly not a business model aimed at customer satisfaction.
Please use a personal email. My email is ‘mail’ @ ‘my actual name’. It does not get more personal than that
It’s a legit rule they’re enforcing, IMO. Generic email addresses are usually unmonitored mailboxes that don’t bounce. Easy to use if you’re spamming contact forms and stuff like that.
Instead they advised me (3 times) to create a personal email on a service like Yahoo, Outlook, Gmail, Orange, etc
I think this is more a boilerplate suggestion, to lower the barrier to entry for people. Gotta remember, those of us that host our own email and/or use our own personal domains are definitely in the minority.
Not really. Here in Australia, our supermarket duopoly does the same thing, offering discounts per litre. At the time it all started, the supermarket chains started buying into/acquiring petrol stations and rebranding them. This has been going on for over 20 years.
Recently, both supermarkets sold off their petrol station chains, but the sales included long-standing agreements to continue to offer discounts and loyalty program points for those that shop at the associated supermarket brand.
For my wife, I have a separate library folder, mapped to just her account in Plex. It doesn’t appear in my library at all, so I don’t really care. Even better, I’ve spun up an Overseerr instance for her, so she can just search and auto-add anything she wants for herself.
Yep, agreed, but at least the government of the day can try and rein them in with legislation and regulation. Not saying they are (or will), but they’d have the option, if they had the balls to do it.
It all depends on how you want to homelab.
I was into low power homelabbing for a while - half a dozen Raspberry Pis - and it was great. But I’m an incessant tinkerer. I like to experiment with new tech all the time, and am always cloning various repos to try out new stuff. I was reaching a limit with how much I could achieve with just Docker alone, and I really wanted to virtualise my firewall/router. There were other drivers too. I wanted to cut the streaming cord, and saving that monthly spend helped justify what came next.
I bought a pair of ex-enterprise servers (HP DL360s) and jumped into Proxmox. I now have an OPNsense VM for my firewall/router, and host over 40 Proxmox CTs, running (at a guess) around 60-70 different services across them.
I love it, because Proxmox gives me full separation of each service. Each one has its own CT. Think of that as me running dozens of Raspberry Pis, without the headache of managing all that hardware. On top of that, Docker gives me complete portability and recoverability. I can move services around quite easily, and can update/rollback with ease.
Finally, the combination of the two gives me a huge advantage over bare metal for rapid prototyping.
Let’s say there’s a new contender that competes with Immich. They offer the promise of a really cool feature no one else has thought of in a self-hosted personal photo library. I have Immich hosted on a CT, using Docker, and hiding behind Nginx Proxy Manager (also on a CT), accessible via photos.domain on my home network.

I can spin up a Proxmox CT from my custom Debian template, use my Ansible playbook to provision Docker and all the other bits, access it in Portainer and spin up the latest and greatest Immich competitor, all within mere minutes. Like, literally 10 minutes max.
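That flow, roughly, as commands - the VMID, template, bridge and playbook names are all placeholders, not my actual setup:

```shell
# Clone-and-try workflow sketch: new CT from a custom Debian template, then provision it.
pct create 200 local:vztmpl/debian-12-custom.tar.zst \
  --hostname immich-rival --cores 2 --memory 2048 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1
pct start 200

# Then point Ansible at the new CT's address to lay down Docker and friends.
ansible-playbook -i '192.168.1.200,' provision-docker.yml
```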
I have a play with the competitor for a bit. If I don’t like it, I just delete the CT and move on. If I do, I can point my photos.domain hostname (via Nginx Proxy Manager) to the new service and start using it full-time. Importantly, I can still keep my original Immich CT in place - maybe shutdown, maybe not - just in case I discover something I don’t like about the new kid on the block.

That’s a simplified example, but hopefully illustrates at least what I get out of using Proxmox the way I do.
The con for me is the cost: the initial hardware outlay, plus the cost of powering beefier kit like this. I’m about to invest in some decent centralised storage (been surviving with a couple of li’l ARM-based NASes) so I can get true HA with my OPNsense firewall (and a few other services), so that’s more cost again.