I recognize this will vary depending on how much you self-host, so I’m curious about the range of experiences, from people hosting just a few things to those running many.

Also, how would you compare it to the maintenance of your other systems (e.g. personal computer, phone, etc.)?

  • ssdfsdf3488sd@lemmy.world · 6 months ago

    Almost none now that I’ve automated updates and a few other things with Kestra and Ansible. I need to figure out alerting in Wazuh, and then it will probably drop to none.
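
    For anyone curious, a rough sketch of what the update-playbook side of that can look like. This is a hedged example rather than my exact setup; the “homelab” group name is made up:

    ```yaml
    # update.yml - minimal unattended-update playbook (illustrative)
    - name: Apply pending OS updates
      hosts: homelab          # hypothetical inventory group
      become: true
      tasks:
        - name: Upgrade all packages (Debian/Ubuntu)
          ansible.builtin.apt:
            update_cache: true
            upgrade: dist

        - name: Check whether a reboot is required
          ansible.builtin.stat:
            path: /var/run/reboot-required
          register: reboot_required

        - name: Reboot if the upgrade asked for one
          ansible.builtin.reboot:
          when: reboot_required.stat.exists
    ```

    Kestra (or plain cron) then just has to run `ansible-playbook -i inventory.yml update.yml` on a schedule.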

  • Matt The Horwood@lemmy.horwood.cloud · 8 months ago

    I have just been round my small setup and run OS updates; it took about an hour, including a reboot of a dedicated server at OVH.

    A Pi and a mini PC at home, plus a dedi at OVH running 2 LXC containers and 5 QEMU VMs. All Debian, a mix of 11 and 12.

    I spend Wednesday evenings checking which updates need installing: I get a weekly email from newreleases.io listing software updates, and I run Semaphore to check for OS updates.

  • Max-P@lemmy.max-p.me · 8 months ago

    Very minimal. Mostly just run updates every now and then and fix what breaks which is relatively rare. The Docker stacks in particular are quite painless.

    A couple of websites, Lemmy, Matrix, a whole email stack, DNS, an IRC bouncer, Nextcloud, WireGuard, Jitsi, a Minecraft server, and I believe that’s about it?

    I’m a DevOps engineer at work, managing 2k+ VMs that I can more than keep up with. I’d say it varies more with experience and how it’s set up than with how much you manage. When you use Ansible, Terraform, and Kubernetes, the count of servers and services isn’t really important. One, five, ten, a thousand servers: it matters very little, since you just run Ansible on them and 5 minutes later it’s all up and running. I don’t use that for my own servers out of laziness, but still, I set most of that stuff up 10 years ago and it’s still happily humming along just fine.
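
    To make that concrete, here’s a rough sketch of the kind of inventory I mean (host and group names invented). Adding a server is just adding a line; the playbooks stay exactly the same:

    ```yaml
    # inventory.yml - hypothetical example; the playbooks don't care how long this gets
    all:
      children:
        web:
          hosts:
            web01.example.com:
            web02.example.com:
        home:
          hosts:
            nas.lan:
            pi.lan:
    ```

    Whether `web` lists two hosts or two hundred, the command is the same: `ansible-playbook -i inventory.yml site.yml`.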

    • MBV ⚜️@lemm.ee · 8 months ago

      Same same - just one update a week, on Fridays, between two yawns, for the 4 VMs and 10-15 services I have, plus a quarterly backup. It doesn’t involve much, apart from the odd ad-hoc re-linking of the reverse proxy when containers switch IPs on the Docker network after the VM restarts/resets.
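
      I could probably avoid the re-linking altogether by referencing containers by service name on a user-defined network instead of by IP; Docker’s internal DNS keeps the name pointing at the right container. A rough sketch, with invented service names:

      ```yaml
      # docker-compose.yml - sketch; "proxy" and "app" are placeholder names
      services:
        proxy:
          image: nginx:stable
          ports:
            - "443:443"
          networks: [backend]
          # point the nginx config at http://app:8080 - the service name resolves
          # to whatever IP the container currently has
        app:
          image: ghcr.io/example/app:1.0   # placeholder image
          networks: [backend]

      networks:
        backend:
      ```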

    • Footnote2669@lemmy.zip · 8 months ago

      +1 for Docker and minimal maintenance. Only updates or new containers might break stuff; if you don’t touch it, it will be fine. Of course there might be some container-specific problems, depending on what you want to run. And I’m not a DevOps engineer like Max 😅
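
      One habit that helps keep updates from breaking stuff is pinning image tags instead of running :latest, so nothing changes until you bump the version yourself. Rough sketch (the app and version number are just examples):

      ```yaml
      # With a pinned tag, "docker compose pull" changes nothing until you
      # deliberately edit the version - time to read the changelog first
      services:
        jellyfin:
          image: jellyfin/jellyfin:10.9.7   # example pin; ":latest" updates unpredictably
          restart: unless-stopped
      ```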

  • 0110010001100010@lemmy.world · 8 months ago

    Typically, very little. I have ~40 containers in my Docker stack and by and large it just works. I upgrade stuff here and there as needed. I’m getting ready to do a hardware refresh, but again, with Docker that’s pretty painless.

    Most of the time spent in my lab is trying out new things. I’ll find a new something that looks cool and go down the rabbit hole with it for a while. Then back to the status quo.

  • drkt@lemmy.dbzer0.com · 8 months ago

    If my ISP didn’t constantly break my network from their side, I’d have effectively no downtime and nearly zero maintenance. I don’t live on the bleeding edge, I don’t do anything particularly experimental, and most of my containers are as minimal as possible.

    My setup:

    • x86 router I built myself, running OPNsense
    • Proxmox hypervisor
    • cheapo WiFi AP
    • ThinkCentre NAS (just 1 drive, Debian with Samba)
    • containers: Tor relay, gonic, corrade, owot, Apache, backups, DNS, Owncast

    All of this just works if I leave it alone

  • henfredemars@infosec.pub · 8 months ago

    Huge amounts of daily maintenance because I lack self control and keep changing things that were previously working.

    • Scrubbles@poptalk.scrubbles.tech · 8 months ago

      Highly recommend doing infrastructure-as-code. It makes it really easy to git commit a previously working state, so you can backtrack when something goes wrong.
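
      Even a single compose file in a git repo counts. A hedged sketch (image name and paths are placeholders):

      ```yaml
      # docker-compose.yml - lives in git next to its bind-mounted config, so
      # "git log" is a history of working states and "git revert" is the rollback
      services:
        app:
          image: ghcr.io/example/app:1.2   # pinned tag, bumped via a commit
          volumes:
            - ./config:/etc/app            # bind mount: config is versioned alongside
            - app-data:/var/lib/app        # named volume: data, backed up separately
          restart: unless-stopped

      volumes:
        app-data:
      ```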

      • Kaldo@kbin.social · 8 months ago

        Got any decent guides on how to do it? I guess a docker compose file can do most of the work there; I’m not sure about volume backups and other dependencies in the OS.

          • Kaldo@kbin.social · 8 months ago

            Oh, I think I tried at one point, and when the guide started talking about inventories, playbooks, and hosts in the first step, it broke me a little xd

            • kernelle@lemmy.world · 8 months ago

              I get it. The inventory is just a list of all the servers and PCs you’re trying to manage, and the playbooks contain every step you would otherwise take to configure everything manually.

              I’ll be honest, when you first set it up it’s daunting, but that’s the thing! You only need to do it once; then you can deploy and redeploy anything you have in minutes.
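
              To show how small a playbook can start, here’s a hedged sketch; every task is just a manual step written down (package names and paths are arbitrary examples):

              ```yaml
              # site.yml - each task mirrors something you'd otherwise do by hand over SSH
              - name: Configure a freshly installed box
                hosts: all
                become: true
                tasks:
                  - name: Install the packages I always want    # "apt install ..." by hand
                    ansible.builtin.apt:
                      name: [vim, htop, curl]
                      state: present

                  - name: Drop in my sshd config                # "scp + restart" by hand
                    ansible.builtin.copy:
                      src: files/sshd_config
                      dest: /etc/ssh/sshd_config
                    notify: restart ssh

                handlers:
                  - name: restart ssh
                    ansible.builtin.service:
                      name: ssh
                      state: restarted
              ```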

              Edit: found this useful resource

        • webhead@lemmy.world · 8 months ago

          I opted for weekly so I could store longer time periods. If I want to go a month back, I just need 4 backups instead of 30. At least that was the main idea. I’ve definitely realized I fucked something up weeks ago without noticing before, lol.

          • I’ve got PBS set up to keep 7 daily backups and 4 weekly backups. I used to retain multiple monthly backups, but I realized I never needed those, and since I sync my backups volume to B2, it was costing me $$.

            What I need to do is shop around for a storage VM in the cloud that I could install PBS on. Then I’d have more granular control over what’s synced, instead of the current all-or-nothing approach. I just don’t think I’m going to find something that comes in at B2’s pricing and reliability.

  • Decronym@lemmy.decronym.xyz [bot] · 6 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    AP: WiFi Access Point
    DHCP: Dynamic Host Configuration Protocol, automates assignment of IPs when connecting to a network
    DNS: Domain Name Service/System
    Git: popular version control system, primarily for code
    IP: Internet Protocol
    LTS: Long Term Support software version
    LXC: Linux Containers
    NAS: Network-Attached Storage
    RAID: Redundant Array of Independent Disks for mass storage
    RPi: Raspberry Pi brand of SBC
    SBC: Single-Board Computer
    SSD: Solid State Drive mass storage
    SSH: Secure Shell for remote terminal access
    VPN: Virtual Private Network
    VPS: Virtual Private Server (opposed to shared hosting)

    [Thread #710 for this sub, first seen 24th Apr 2024, 20:25]

  • Showroom7561@lemmy.ca · 8 months ago

    Synology user running some docker containers.

    Very, very little maintenance. If there’s an update for something on Docker, it’s a simple click in the container manager and it’s done. Yes, I could automate this, but I prefer to do these manually, as many of the Docker apps I use are under heavy development and I like to know what’s changing with each version.

    Synology packages update easily, and the system updates happen only once in a while. A click and reboot.

    I’ve tried to minimize things as much as possible, and to make things easier for me. One day, someone in my family will need to take over, and I don’t want to over-complicate things for them, lest they lose all our family photos, documents, etc.

    I probably spend more time keeping the fans on my actual NAS clean of dust than I do maintaining the software end of things. LOL

    edit: spelling

  • CarbonatedPastaSauce@lemmy.world · 8 months ago

    It’s bursty; I tend to do a lot of work on stuff when I do a hardware upgrade, but otherwise it’s set-it-and-forget-it for the most part. The only servers I pay any significant attention to, in terms of frequent maintenance and security checks, are the MTAs in the DMZ for my email. Nothing else is exposed to the internet for inbound traffic, except a game server VM that’s segregated (credential-wise and network-wise) from everything else, so if it does get compromised it would pose very minimal danger to the rest of my network. Everything either has automated updates or, for servers I want more control over, I update manually when the mood strikes me or when a big vulnerability affecting my software hits the news.

    TL;DR: averaged over a year, I maybe spend 30-60 minutes a week on self-hosting maintenance tasks for 4 physical servers and about 20 VMs.

  • hperrin@lemmy.world · 8 months ago

    If you set it up really well, you’ll probably only need to invest maybe an hour or so every week or two. But it also depends on what kind of maintenance you mean. I spend a lot of time downloading things and putting them in the right place so that my TV is properly entertaining. Is that maintenance? As for updating things, I’ve set up most of that to be automatic. The stuff that’s not automatic, like pulling new Docker images, I do every couple of weeks. Sometimes that involves running update scripts or changing configs; usually it’s just a couple of commands.
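
    (For a compose-based stack, those commands are typically something like `docker compose pull` followed by `docker compose up -d`, though the exact steps vary by setup.)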

    • ALostInquirer@lemm.ee (OP) · 8 months ago

      Yeah, to clarify, I don’t mean organizing/arranging files as part of maintenance; I mean handling the various installs/configs/updates. Since more folks come around to ask for help, it can sometimes appear as if it’s all much more involved to maintain than it may otherwise be (given the right setup and the knowledge to deal with any hiccups).

  • Deckweiss@lemmy.world · 8 months ago

    After my Nextcloud server just killed itself from an update and I ditched that junk software, nearly zero maintenance.

    I have

    • auto-updates on
    • daily Borg backups to a Hetzner storage box
    • auto snapshots of the servers at Hetzner
    • cloud-init scripts ready for any of the servers (sketch below)
    • XPipe for management
    • KeePass as a backup for all the SSH keys and passwords

    And I have never used any of those … it just runs and keeps running.
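
    The cloud-init part is what makes a rebuild a non-event. Roughly like this (user, key, and package list are placeholder examples, not my real config):

    ```yaml
    #cloud-config
    # Hedged sketch of a cloud-init user-data file; all values are illustrative
    users:
      - name: deploy                        # hypothetical admin user
        ssh_authorized_keys:
          - ssh-ed25519 AAAA... deploy@laptop
        sudo: ALL=(ALL) NOPASSWD:ALL
    package_update: true
    packages:
      - borgbackup
      - unattended-upgrades
      - htop
    runcmd:
      - systemctl enable --now unattended-upgrades
    ```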

    I am self-hosting

    • a website
    • a booking service for me
    • a CalDAV server
    • Forgejo
    • Opengist
    • Jitsi

    I still need to set up some file-sharing thing (a Nextcloud replacement), but I’m not sure what. My use case is mainly 1) archiving junk, 2) syncing files between three devices, and 3) streaming my music collection.

    • Lem453@lemmy.ca · 8 months ago

      I moved from Nextcloud to Seafile. The file sync is so much better than Nextcloud and ownCloud.

      It has a normal Windows client and also a mount-type client (SeaDrive), which is amazing for large libraries.

      I have mine set up with OAuth via Authentik, and it works super well.

      • Deckweiss@lemmy.world · 8 months ago

        I actually moved from Seafile to Nextcloud because, with two PCs running simultaneously, it would constantly hit sync errors that I had to resolve manually. Sadly, Nextcloud wasn’t really better. I’m now looking for a solution that can avoid file conflicts with two simultaneous clients.

        • Lem453@lemmy.ca · 8 months ago

          Are you changing the same files at the same time?

          I have multiple computers syncing into the same library all the time without issue.

          • Deckweiss@lemmy.world · 8 months ago

            Are you changing the same files at the same time?

            Rarely. But there is some offline laptop use, compounded with slow sync times (I was running it on a RasPi with an external USB HDD enclosure).

            Either way, I’d like something less fragile. I’ll test seafile again sometime, thanks.

  • CatTrickery@lemmy.blahaj.zone · 8 months ago

    Since scrapping systemd, a hell of a lot less. It can occasionally be a bit of messing about when my dynamic IP gets reassigned, though.