Hello people,

I am looking for tips on how to make my self-hosted setup as safe as possible.

Some background: I started self-hosting some services about a year ago, using an old Lenovo thin client. It’s plenty powerful for what I’m asking it to do, and it’s not too loud. Hardware-wise, I am not expecting to change things up any time soon.

I am not expecting anyone to take the time to baby me through the process; I will be more than happy with some links to good articles and the like. My main problem is that there’s so much information out there that I just don’t know where to start or what to trust.

Anyways, thank you for reading.

N

  • Matej@matejc.com · 11 months ago

    Software:

    • firewall: no inbound connections, and restrict outbound (see the sketch just below this list)
    • use immutable OS
    • full disk encryption (keep in mind that in many setups you will need to be physically at the computer after a restart to unlock it)
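
    A minimal sketch of the firewall point, assuming a Debian/Ubuntu-style host with ufw installed and run as root; the allowed port is only an example, adjust it to whatever you actually expose:

        import subprocess

        # Default-deny inbound, allow outbound (swap "allow" for "deny" if you
        # also want to restrict outgoing traffic), and keep the management port open.
        rules = [
            ["ufw", "default", "deny", "incoming"],
            ["ufw", "default", "allow", "outgoing"],
            ["ufw", "allow", "22/tcp"],    # example: SSH; change to your port
            ["ufw", "--force", "enable"],  # skip the interactive confirmation
        ]
        for rule in rules:
            subprocess.run(rule, check=True)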

    Hardware:

    • put it in a trusted datacenter (home hardware is not safe from teenagers, or from someone who needs the computer’s electrical socket for a vacuum cleaner)
    • TCB13@lemmy.world · 11 months ago

      use immutable OS

      Just no.

      Immutable distros are all about making things that were easy into complex, “locked down”, “inflexible” bullshit, to justify jobs and paid tech stacks and a soon-to-be-released proprietary solution.

      Security isn’t even a valid argument for immutable distros, because Ansible, containers, ZFS and BTRFS already provided all the immutability we needed, but someone decided it was time to repackage proven development techniques in the hopes of eventually selling some orchestration and/or other proprietary repository / platform / BS, like Docker / Kubernetes does.
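
      To make the snapshot point concrete, a minimal sketch, assuming / is a BTRFS subvolume and a /.snapshots directory already exists (the naming is just an example); run it as root before a risky change:

          import subprocess
          from datetime import datetime

          # Take a read-only snapshot of the root subvolume so the change can be rolled back later.
          target = f"/.snapshots/root-{datetime.now():%Y%m%d-%H%M%S}"
          subprocess.run(["btrfs", "subvolume", "snapshot", "-r", "/", target], check=True)
          print("read-only snapshot created at", target)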

      “Oh but there are truly open-source immutable distros” … true, but this hype is much like Docker, and it will invariably lead people down a path that ends in some proprietary solution or dependency that is only required because the “new” technology alone doesn’t deliver what the older ones did. As with the CentOS fiasco or Docker, it doesn’t really matter that truly open-source ecosystems of immutable distributions exist, because in the end people and companies will pick the proprietary / closed option just because “it’s easier to use” or some other thing that looks good in the short term and is very bad in the long term. This happened with CentOS vs Debian, it is currently unfolding with Docker vs LXC/RKT, and it will happen with Ubuntu vs Debian for all those who moved from CentOS to Ubuntu.

      We had good examples of immutable distributions and architectures before this new hype. We’ve been using MIPS routers and/or IoT devices that are usually immutable, and there are also reasons why people are moving away from those towards more mutable ARM architectures.

      • Guenther_Amanita@feddit.de · 11 months ago

        Dude… It’s the hundredth time you’ve posted this copypasta.
        Image-based OSs aren’t locked down and also don’t depend on proprietary services.

        You can just read the post I made about immutable systems; maybe we can discuss it there.

        That said, I wouldn’t choose an image-based OS for servers right now either. At least not yet.
        I’m just worried about compatibility, because, for now, many installers and services might rely on access to the root file system. Debian is currently the best choice as a server OS, but that might change in the future.

        • TCB13@lemmy.world · 11 months ago

          Image-based OSs aren’t locked down and also don’t depend on proprietary services.

          I’m sure we’ve been over this. It’s just a question of time until those solutions become unmanageable at scale and for more professional users, and then a magic proprietary solution that fixes it all will appear. Exactly what happened with Docker/DockerHub/Kubernetes.

          I’m just worried about compatibility, because, for now, many installers and services might rely on access to the root file system. Debian is currently the best choice as a server OS, but that might change in the future.

          Use BTRFS/ZFS snapshots to roll back if anything breaks. Either way, you can run your stuff in LXD/LXC containers, which are easy to set up and resolve the root filesystem issue.
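
          A minimal sketch of that container approach, assuming LXD is installed and initialised (lxd init); the container name and image are just examples, use whatever your remotes provide:

              import subprocess

              container = "webapp"        # example name for one self-hosted service
              image = "images:debian/12"  # example image; pick any your LXD remotes offer

              # Create the container and do the service setup inside it,
              # so nothing touches the host's root filesystem.
              subprocess.run(["lxc", "launch", image, container], check=True)
              subprocess.run(["lxc", "exec", container, "--", "apt-get", "update"], check=True)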