I never understood how to use Docker. What makes it so special? I would really like to use it on my Raspberry Pi 3 Model B+ to ease the setup process of self-hosting different things.

I’m currently running these things without Docker:

  • Mumble server with a Discord bridge and a music bot
  • Maubot, a plugin-based Matrix bot
  • FTP server
  • Two Discord music bots

All of these things are running as systemd services in the background. Should I change this? A lot of the things I’m hosting offer Docker images.

It would also be great if someone could give me a quick-start guide for Docker. Thanks in advance!

  • excitingburp@lemmy.world

    For your use case, consider it to be a packaging format (like AppImage, Flatpak, Deb, RPM, etc.) that includes all the dependencies (including services, not just libraries) for the app in question.

    Should I change this?

    If it’s not broken don’t fix it.

    Use Podman (my preference; its systemd integration is awesome), containerd, or Incus. Docker is a graveyard of half-finished pet projects that have no reason for existing. Podman has a Docker-compatible socket, so 100% of Docker tooling will work with it.
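
    As a rough sketch of that systemd approach (the container name, image, and port here are just examples, and `podman generate systemd` is gradually being superseded by Quadlet files, but it still works):

    ```shell
    # run a container rootless, then hand its lifecycle to systemd
    podman run -d --name mumble -p 64738:64738 docker.io/mumblevoip/mumble-server
    podman generate systemd --new --name mumble \
      > ~/.config/systemd/user/mumble.service
    systemctl --user daemon-reload
    systemctl --user enable --now mumble.service
    ```

    From then on the container is managed like any other systemd service (status, restart, journal logs).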

    • ComradeKhoumrag@infosec.pub

      I can add that Podman was ignored at my day job in previous years because there were some reliability issues, either with GPU access or networking, I forget which. However, those issues have been resolved, and we're reimplementing it pretty much effortlessly.

  • BellyPurpledGerbil@sh.itjust.works

    It’s virtual machines, but faster, more configurable, with a considerably larger set of automation, and it consumes fewer resources than a traditional VM. Additionally, in software development it helps solve a problem summarized as “works on my machine.” A lot of traditional server creation and management relied on systems being set up perfectly identically on every deployment, to prevent dumb defects that depended on whose machine the code was written on. With Docker, it’s stupid easy to copy the automated configuration from “my machine” to “your machine.” Now everyone, including the production systems, is running from “my machine.” That’s kind of a big deal, even if it could naturally be done in other ways on Linux; those ways just don’t have the same ease of use or shareability.
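
    That shareable “my machine” boils down to one small text file. A minimal sketch (the base image and app names here are invented for illustration):

    ```dockerfile
    # Everyone who builds this gets the exact same environment
    FROM python:3.12-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .
    CMD ["python", "bot.py"]
    ```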

    What you’re doing is perfectly reasonable. That’s a fine way of getting around using Docker. You aren’t forced into using it; it’s just easier for most people.

    • modeler@lemmy.world

      This is exactly the answer.

      I’d just expand on one thing: many systems have multiple apps that need to run at the same time. Each app has its own dependencies, sometimes requiring a specific version of a library.

      In this situation, it’s very easy for one app to need v1 of MyCleverLibrary (and fails with v2) and another needs v2 (and fails with v1). And then at the next OS update, the distro updates to v2.5 and breaks everything.

      In this situation, before containers, you would be stuck, or forced into difficult workarounds, including different LD_LIBRARY_PATH settings that then break at the next update.

      Using containers, each app has its own libraries at the correct and tested versions. These subtle interdependencies are eliminated and packages ‘just work’.

      • TDCN@feddit.dk

        I can also add that if you want to run multiple programs that each have a web interface, it’s easy to map each interface to the port you want, instead of having to go through various config files that differ for each program, or, worst case, having to change a hardcoded port in some software. With Docker you have the same easy config options for every service you want to run. Same with storage paths: various programs store their files in seemingly random places, but with Docker you just map a folder and all your files are stored there without any further config.
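
        For example (the image names and host paths here are made up), remapping two web UIs and their storage uses the same two flags for every service:

        ```shell
        # -p remaps each web UI port, -v chooses where the data lives on the host
        docker run -d --name app1 -p 8081:80 -v /srv/app1:/data example/app1
        docker run -d --name app2 -p 8082:80 -v /srv/app2:/data example/app2
        ```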

      • BellyPurpledGerbil@sh.itjust.works

        I approve of this expanded answer. I may have been too ELI5 in my post.

        If the OP has read this far, I’m not telling you to use docker, but you could consider it if you want to store all of your services and their configurations in a backup somewhere on your network so if you have to set up a new raspberry pi for any reason, now it’s a simple sequence of docker commands (or one docker-compose command) to get back up and running. You won’t need to remember how to reinstall all of the dependencies.

  • Daniel Quinn@lemmy.ca

    There have been some great answers on this so far, but I want to highlight my favourite part of Docker: the disposability.

    When you have a running Docker container, you can hop in, fuck about with files, break stuff as you try to figure something out, and then kill the container and all of the mess you’ve created is gone. Now tweak your config and spin up a fresh one exactly the way you need it.

    You’ve been running a service for 6 months and there’s a new upgrade. Delete your instance and just start up the new one. Worried that there might be some cruft left over from before? Don’t be! Every new instance is a clean slate. Regular, reproducible deployments are the norm now.
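
    As a sketch (the container and image names are placeholders):

    ```shell
    docker exec -it myapp sh             # hop in and fuck about with files
    docker rm -f myapp                   # kill it: all the mess is gone
    docker run -d --name myapp myimage   # fresh instance, clean slate
    ```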

    As a developer it’s even better: the thing you develop locally is identical to the thing that’s built, tested, and deployed in CI.

    I <3 Docker!

      • DecentM@lemmy.ml

        The most popular way of configuring containers is by using environment variables that live outside the container. But for apps that use files to store configuration, you can designate directories on your host that will be available inside the container (called “volumes” in Docker land). It’s also possible to link multiple containers together, so you can have a database container running alongside the app.
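
        A sketch of all three ideas together (the app image, variable names, and paths are hypothetical):

        ```shell
        docker network create appnet                # lets containers reach each other by name
        docker run -d --network appnet --name db \
          -e POSTGRES_PASSWORD=secret postgres:16   # configured via environment variable
        docker run -d --network appnet --name app \
          -v /srv/app/config:/config example/app    # config files live on the host (volume)
        ```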

        • electric_nan@lemmy.ml

          If you have all of that set up then, what benefit is there to blowing away your container and spinning up a ‘fresh’ one? I’ve never been able to wrap my head around docker, and I think this is a big part of it.

          • DecentM@lemmy.ml

            There’s a lot more to an application than its configuration. It may require certain specific system libraries, need a certain way of starting up, or a whole host of other special things. With a container, the app dev can precreate a perfect environment for their program and save you LOADS of hassle trying to set it up.

            The benefit of all this is that you know exactly where application state is stored, you know you’re running the app in its intended environment, and it becomes turbo easy to install updates, or roll back if needed.

            Totally spin up a VM, install docker on it, and deploy 2-3 web apps. You’ll notice that you use the same way of configuring them, starting and stopping them, and you might not want to look back ;)

            • electric_nan@lemmy.ml

              I’ve played with it a bit. I think I was using something called DockStarter and Portainer. Like I said though, I could never quite grasp what was going on. Now for my home webapps I use Yunohost, and for my media server I use Swizzin CE. I’ve found these to be a lot easier, but I will try Docker again sometime.

  • kevincox@lemmy.ml

    I feel that a lot of people here are missing the point. Docker is popular for selfhosted services for a few main reasons:

    1. It is one package that can be used on any distribution (or even OS with a Linux VM).
    2. The package contains all dependencies required to run the software so it is pretty reliable.
    3. It provides some basic sandboxing against non-malicious services. Basically the service can’t scribble all over your filesystem. It can only write to specific directories that you have given it access to (via volumes) other than by exploiting security vulnerabilities.
    4. The volume system also makes it very obvious what data is important and needs to be backed up: you have a short list.
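
    For instance, if every service’s volumes are bind mounts kept under one directory (the layout here is hypothetical), the backup really is that short list:

    ```shell
    # everything the containers are allowed to persist lives under /srv/containers
    tar czf containers-backup.tar.gz -C /srv containers/
    ```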

    Docker also has lots of downsides. I would generally say that if your distribution packages software I would prefer the distribution’s package over the docker image. A good distribution package will also solve all of these problems. The main issue you will see with distribution packages is a longer delay before new versions are made available.

    What Docker completely displaced were the previous cross-distribution packaging options, which typically took one of these forms:

    1. Self-contained compiled tarball. Run the program inside as your user. It probably puts its data in the extracted directory, maybe. How do you upgrade? Extract and copy a data directory? Self-update? Code is mutable and mixed with data. Gross.
    2. Install script. Probably runs as root. Makes who-knows-what changes to your system. Where is the data? Is the service running? Will it auto-start on boot? Hope that install script supports your distro.
    3. Source tarball. Figure out the dependencies. Hope they don’t conflict with the versions your distro has. Set up users and init scripts yourself. Hope the build doesn’t take too long.
    • CyberSeeker@discuss.tchncs.de

      Sorry if I’m about 10 years behind Linux development, but how does Docker compare with the latest Flatpak trend in application distribution? The way you’ve described it sounds somewhat similar, aside from also getting segmented access to data and networks.

      • kevincox@lemmy.ml

        For desktop apps Flatpak is almost certainly a better option than Docker. Flatpak uses the same core concepts as Docker but Flatpak is more suited for distributing graphical apps.

        1. Built in support for sharing graphics drivers, display server connections, fonts and themes.
        2. Most Flatpaks use common base images. Not only will this save disk space if you have lots of, say, GNOME applications, as they will share the same base, but it also means that you can ship security updates for common libraries separately from application updates. (Pinned insecure libraries are still a problem in general; it’s just improved over the Docker case.)
        3. Better desktop integration via the use of “portals” that allow requesting specific things (screenshot, open file, save file, …) without full access to the user’s system.
        4. Configuration UIs that are optimized for the desktop use case: graphical tools to install, uninstall, and manage permissions.

        Generally I would still default to my distro’s packages where possible, but if they are unsuitable for whatever reason (not available, too old, …) then a Flatpak is a great option.

      • towerful@programming.dev

        Docker is to servers as Flatpak is to desktop apps.
        I would probably run away if I saw Flatpak on a headless server.

        • matcha_addict@lemy.lol

          Flatpak has better security features than Docker. While it’s true it’s not designed with server apps in mind, it is possible to use its underlying “bubblewrap” to create isolated environments. Maybe in the future, tooling will improve and bridge the gap.

  • slazer2au@lemmy.world

    IMHO with docker and containerization in general you are trading drive space for consistency and relative simplicity.

    A hypothetical:
    You set up your Mumble server and it requires the leftpad 3.7 package to run. You install it and everything is fine.
    Now you install your FTP server, but it needs leftpad 5.5. What do you do? Hope the function that Mumble uses from 3.7 still exists in 5.5? Run each app in its own venv?

    Docker and containerization resolve this by running each app in its own mini virtual machine. A container running Mumble with leftpad 3.7 can coexist on a host that also has a container running an FTP server with leftpad 5.5.
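
    A sketch of that coexistence as a docker-compose.yml (the image names are invented):

    ```yaml
    # each image bundles its own leftpad; the host needs neither version
    services:
      mumble:
        image: example/mumble-leftpad:3.7
      ftp:
        image: example/ftp-leftpad:5.5
    ```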

    Here is a good video on the hole Docker and containerization look to fill:
    https://www.youtube.com/watch?v=Nm1tfmZDqo8

    • Riskable@programming.dev

      Docker containers aren’t running in a virtual machine. They’re running in what amounts to a fancy chroot jail… It’s just an isolated environment that takes advantage of several kernel security features to make software running inside the environment think everything is normal despite being locked down.

      This is a very important distinction, because it means that Docker containers are very lightweight compared to a VM. They use but a fraction of the resources a VM would, and can be brought up and down in milliseconds since there’s no hardware to emulate.
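
      You can poke at the same kernel features by hand; a rough sketch (the rootfs path is hypothetical, and this is far less than what Docker actually sets up):

      ```shell
      # new PID, mount and network namespaces around a chroot:
      # the kernel-level core of a "container", no hardware emulated
      sudo unshare --pid --mount --net --fork chroot /srv/rootfs /bin/sh
      ```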

      • notfromhere@lemmy.ml

        FYI the Docker engine can use different runtimes, and there are lightweight VM runtimes like Kata or Firecracker. I hope one day Docker will default to that technology, as it would be better for the overall security of containers.

      • uzay@infosec.pub

        To put it in simpler terms, I’d say that containers virtualise only the operating system rather than the whole underlying machine.

        I guess not then.

        • 𝒍𝒆𝒎𝒂𝒏𝒏@lemmy.dbzer0.com

          Not exactly IMO, as containers themselves can simultaneously access devices and filesystems from the host system natively (such as VAAPI devices used for hardware encoding & decoding) or even the docker socket to control the host system’s Docker daemon.

          They also can launch directly into a program you specify, bypassing any kind of init system requirement.

          OC’s suggestion of a chroot jail is the closest explanation I can think of too, if things were to be simplified

        • Atemu@lemmy.ml

          The operating system is explicitly not virtualised with containers.

          What you’ve described is closer to paravirtualisation where it’s still a separate operating system in the guest but the hardware doesn’t pretend to be physical anymore and is explicitly a software interface.

        • pztrn@bin.pztrn.name

          It virtualises only parts of the operating system (namely process and network namespaces, with the ability to pass through devices and mount points). It still uses the host kernel, for example.
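
          That last point is easy to verify, assuming Docker and the alpine image are available:

          ```shell
          uname -r                          # kernel version on the host
          docker run --rm alpine uname -r   # identical: the container shares the host kernel
          ```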

          • loudwhisper@infosec.pub

            I wouldn’t say that namespaces are virtualization either. Containers don’t virtualize anything; namespaces are all inherited from the root namespaces and are therefore completely visible from the host (with the right privileges). It’s just a completely different technology.

            • pztrn@bin.pztrn.name

              I never said that it is virtualization. For ease of understanding I called the created namespaces “virtualized”; here “virtualized” = “isolated”. systemd is able to do that with every process, by the way.

              Also, some “smart individuals” have called containerization a type 3 hypervisor, which makes me laugh so hard :)

            • steakmeoutt@sh.itjust.works

              The word you’re all looking for is sandboxing. That’s what containers are: sandboxes. And while they take a different approach from VMs, they do rely on some similar principles.

    • TCB13@lemmy.world

      Docker and containerization resolve this by running each app in its own mini virtual machine

      While what you’ve written is technically wrong, I get why you did the comparison that way. There are now tons of other containerization solutions that can do exactly what you’re describing without the dark side of Docker.

    • MaximilianKohler@lemmy.world

      Doesn’t that mean that docker containers use up much more resources since you’re installing numerous instances & versions of each program like mumble and leftpad?

      • slazer2au@lemmy.world

        Kinda, but it depends on the size of the dependencies. With drive space being so cheap these days, do you really worry about 50 MB of storage being wasted on 4 different versions of glib or leftpad?

    • loudwhisper@infosec.pub

      I would also add security, or at least accessible security. Containers provide a number of isolation features out of the box, or make them extremely easy to configure, where other systems require far more effort to achieve the same, or can’t achieve it at all.

      Ironically, after some conversation on the topic here on Lemmy I compiled a blog post about it.

      • aksdb@lemmy.world

        Tbf, systemd also makes it relatively easy to sandbox processes. But it’s opt-in, while for containers it’s opt-out.
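
        A sketch of that opt-in hardening in a unit file (these directives are only a small subset of what full container isolation gives you):

        ```ini
        # drop-in for a service: each protection must be added by hand
        [Service]
        NoNewPrivileges=yes
        ProtectSystem=strict
        ProtectHome=yes
        PrivateTmp=yes
        ```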

        • loudwhisper@infosec.pub

          Yeah, and it also requires quite a few options, some with hard-to-predict outcomes. For example, RootDirectory can be used to effectively chroot the process, but that carries implications, such as the application no longer having access to CA certificates, which in containers is generally a solved problem.

  • Decronym@lemmy.decronym.xyzB

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters  More Letters
    CA             (SSL) Certificate Authority
    DNS            Domain Name Service/System
    Git            Popular version control system, primarily for code
    HA             Home Assistant automation software
                   High Availability
    IP             Internet Protocol
    LXC            Linux Containers
    NAS            Network-Attached Storage
    SBC            Single-Board Computer
    SSD            Solid State Drive mass storage
    SSL            Secure Sockets Layer, for transparent encryption

    9 acronyms in this thread; the most compressed thread commented on today has 15 acronyms.

    [Thread #592 for this sub, first seen 11th Mar 2024, 17:25] [FAQ] [Full list] [Contact] [Source code]

  • marcos@lemmy.world

    Try to run something that requires php7 and something else that requires php8 on the same web server; or python 2 and python 3.

    You actually can, but it’s not pretty.
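
    With containers it is pretty; for example, using the official PHP images:

    ```shell
    # two PHP versions side by side, each with its own bundled web server
    docker run -d --name legacy-app -p 8070:80 php:7.4-apache
    docker run -d --name modern-app -p 8080:80 php:8.3-apache
    ```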

    (The thing about a declarative setup isn’t much of a difference, you can do it for any popular Linux distro.)

    • MaximilianKohler@lemmy.world

      Doesn’t that mean that docker containers use up much more resources since you’re installing numerous instances & versions of each program like PHP?

      • marcos@lemmy.world

        Oh, sure, the bloat on your images requires resources from the host.

        There is the option of sharing things. But, obviously that conflicts a bit with maintaining your environments isolated.

  • redcalcium@lemmy.institute

    One of the main reasons why Docker and Kubernetes took off is that they standardized the deployment process. Say you have 20 services running on your servers. It’s much easier to maintain those 20 services as a set of YAML files that follow a certain standard than as 20 config files, each in a different format. If you only have a couple of services, the advantage is probably not apparent, but as you add more and more services, you’ll start to appreciate it.

    • doeknius_gloek@discuss.tchncs.de

      Yep, I couldn’t run half of the services in my homelab if they weren’t containerized. Running random, complex installation scripts and maintaining multiple services installed side-by-side would be a nightmare.

  • matcha_addict@lemy.lol

    This blog post explains it well:

    https://cosmicbyt.es/posts/demistifying-containers-part-1/

    Essentially, containers are means of creating environments in which you can run software, and those environments are:

    • isolated, which makes it a very controlled environment. Much harder to run into errors
    • reproducible: we have tools that reproduce the same container from an image file
    • easy to distribute: just have the container image.
    • little to no compromises on performance (at least on Linux)

    It is essentially a way for you to run a program without having to worry how to set up the environment, why it didn’t work as expected, what dependencies you’re missing, etc.

  • TCB13@lemmy.world

    The thing with Docker is that people don’t want to learn how to use Linux, and are buying into an overhyped solution that makes their life easier without understanding the long-term consequences. Most of the pro-Docker arguments go around security, and that’s mostly BS, because 1) systemd can provide as much isolation as Docker containers, and 2) there are other container solutions that are at least as safe as Docker, and nobody cares about them.

    Companies such as Microsoft and GitHub are all about re-creating and reconfiguring the way people develop software so everyone will be hostage to their platforms. We see this in everything now; Docker/DockerHub/Kubernetes and GitHub Actions were the first signs of this cancer. We now have a generation that doesn’t understand the basics of their tech stack: networking, DNS, how to deploy a simple thing onto a server that doesn’t use some Docker BS or a 3rd-party cloud deploy-from-GitHub service.

    Before anyone comments that Docker isn’t totally proprietary and there’s Podman, consider the following: it doesn’t really matter if there are truly open-source and open ecosystems of containerization technologies. In the end, people/companies will pick the proprietary/closed option just because “it’s easier to use” or some other specific thing that is good in the short term and very bad in the long term.

    Docker may make development and deployment very easy and may have lowered the bar for newcomers, but it has the dark side of being designed to reconfigure and envelop the way development gets done so that someone can profit from it. That is sad, and above all it sets dangerous precedents and creates generations of engineers and developers who don’t have truly open tools like we did. There’s a LOT of money in transitioning everyone to the “deploy-from-github-to-cloud-x-with-hooks” model, so those companies will keep pushing for it.

    Note that technologies such as Docker keep commoditizing development; it’s a feedback loop that never ends. I say commoditizing development because, if you look at it, those techs only make things easier for the entry-level developer, and companies, instead of hiring developers for their knowledge and ability to develop, are just hiring “cheap monkeys” who are able to configure those technologies and cloud platforms to deliver something. At the end of the day, the business of those cloud companies is transforming developer knowledge into products/services that companies can buy with a click.

    • loudwhisper@infosec.pub

      Most of the pro-Docker arguments go around security

      Actually Docker and the success of containers is mostly due to the ease of shipping code that carries its own dependencies and can be run anywhere. Security is a side-effect and definitely not the reason why containers picked-up.

      systemd can provide as much isolation a docker containers and 2) there are other container solutions that are at least as safe as Docker and nobody cares about them.

      Yes, and it’s much harder to achieve the same. In systemd you need to use 30 different options to get what, with containers, you achieve almost instantly and with much less hassle. I gave an example on my blog where I decided to run blocky under systemd and not in Docker. It’s just less convenient and accessible, harder to debug, and it relies on each individual user to do it right, while with containers a lot gets baked into the image and is therefore harder to mess up.

      Docker isn’t totally proprietary

      There are many container runtimes (CRI-O, Podman, Mirantis, containerd, etc.). Docker is just a convenient API; containers are implemented entirely with native Linux features (namespaces, seccomp, capabilities, cgroups), and images follow an open standard (OCI).

      I will avoid commenting on what looks like a rant, but I want to simply remind you that containers are the successor of VMs (virtualize everything!), platforms that were completely proprietary and in the hands of a handful of vendors, while containers use only native OS features and are therefore a step towards openness.

      • TCB13@lemmy.world

        Docker and the success of containers is mostly due to the ease of shipping code that carries its own dependencies and can be run anywhere

        I don’t disagree with you, but that also shows that most modern software is poorly written. Usually a bunch of solutions that hardly work and nobody is able to reproduce their setup in a quick, sane and secure way.

        There are a many container runtimes (CRI-O, podman, mirantis, containerd, etc.). Docker is just a convenient API, containers are fully implemented just with Linux native features (namespaces, seccomp, capabilities, cgroups) and images follow an open standard (OCI).

        Yes, that’s exactly my point. There are many options, yet people stick with Docker and DockerHub (which is anything but open).

        In systemd you need to use 30 different options to get what, with containers, you achieve almost instantly and with much less hassle.

        Yes… maybe we just need some automation/orchestration tool for that. This is like saying that it’s way too hard to download the rootfs of some distro, unpack it, and then use unshare to launch a shell in an isolated namespace… Docker, as you said, provides a convenient API, but that doesn’t mean we can’t do the same for systemd.

        but I want to simply remind you that containers are the successor of VMs (virtualize everything!), platforms that were completely proprietary and in the hands of a handful of vendor

        Completely proprietary… like QEMU/libvirt? :P

        • towerful@programming.dev

          I use GHCR, and I have no issues pulling images from Amazon ECR or wherever.
          Docker just got there first with the adoption and marketing.

          Automation tools like Ansible and Terraform have existed for ages, and are great for running things without containers.
          OCI just makes it a hell of a lot easier and more portable.

        • loudwhisper@infosec.pub

          but that also shows that most modern software is poorly written

          Does it? I mean, this is especially annoying with old software, maybe dynamically linked or PHP, or stuff like that. Modern tools (go, rust) don’t actually even have this problem. Dependencies are annoying in general, I don’t think it’s a property of modern software.

          Yes, that’s exactly point point. There are many options, yet people stick with Docker and DockerHub (that is everything but open).

          Who are these people? There are tons of registries that people use; GitHub has its own, quay.io, etc. You can also simply publish Dockerfiles and people can build the images themselves. Of course Docker has the edge because it was the first mainstream tool, and it’s still a great choice for single-machine deployments, but it’s far from the only one used. Kubernetes dropped Docker as its default runtime years ago, for example… who are you referring to?

          Yes… maybe we just need some automation/orchestration tool for that. This is like saying that it’s way too hard to download the rootfs of some distro, unpack it and then use unshare to launch a shell on a isolated namespace… Docker as you said provides a convenient API but it doesn’t mean we can’t do the same for systemd.

          But systemd also uses unshare, chroot, etc. They are at the same level of abstraction. Docker (and container runtimes) are simply specialized tools, while systemd is not. Why wouldn’t I use a tool that is meant for this when it’s available? I suppose bubblewrap does something similar too (it’s used by Flatpak), and I am sure there are more.

          Completely proprietary… like QEMU/libvirt? :P

          Right, because organizations generally run QEMU, not VMware, Nutanix and another handful of proprietary platforms… :)

      • towerful@programming.dev

        but I want to simply remind you that containers are the successor of VMs

        Successor implies replacement. I think containers are another tool in the toolkit of servers/hosting, but not a replacement for VMs.

        • loudwhisper@infosec.pub

          Well, I did not mean replacement (in fact, most orgs run in clouds, which use VMs), but I meant that a lot of orgs moved from VMs to containers/Kubernetes as the way to slice their compute. Often the technologies are combined, so you are right.

    • matcha_addict@lemy.lol

      They’re similar under the hood, but Flatpak is optimized for desktop use. Docker targets server applications.

  • hperrin@lemmy.world

    One benefit that might be overlooked here is that as long as you don’t use any Docker Volumes (and instead bind mount a local directory) and you’re using Docker Compose, you can migrate a whole service, tech stack and everything, to a new machine super easily. I just did this with a Minecraft server that outgrew the machine it was on. Just tar the whole directory, copy it to the new host, untar, and docker compose up -d.
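
    A sketch of that migration (the directory name and hostname here are made up):

    ```shell
    # on the old host: compose file plus bind-mounted data live in one directory
    tar czf minecraft.tar.gz minecraft/
    scp minecraft.tar.gz newhost:~

    # on the new host
    tar xzf minecraft.tar.gz
    cd minecraft && docker compose up -d
    ```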

    • AlexPewMaster@lemmy.zipOP

      This docker compose up -d thing is something I don’t understand at all. What exactly does it do? A lot of README.md files from git repos include this command for Docker deployment. And another question: How can you automatically start the Docker container? Do you need a systemd service to run docker compose up -d?

      • Fisch@discuss.tchncs.de

        You just need the docker and docker-compose packages. You make a docker-compose.yml file, and there you define all the settings for the container (image, ports, volumes, …). Then you run docker-compose up -d in the directory where that file is located, and it will automatically create the docker container and run it with the settings you defined. If you make changes to the file and run the command again, it will update the container to use the new settings. In this command, docker-compose is the software that lets you do all this with the docker-compose.yml file, up means it’s bringing the container up (starting it), and -d is for detached, so it does all that in the background (it will still tell you in the terminal what it’s doing while creating the container).
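
        A minimal docker-compose.yml sketch (the image, port, and path are just examples); the restart policy is also how a container comes back automatically after a reboot, without a separate systemd service:

        ```yaml
        services:
          web:
            image: nginx:alpine      # which image to run
            ports:
              - "8080:80"            # host port 8080 -> container port 80
            volumes:
              - ./html:/usr/share/nginx/html:ro   # host folder mounted into the container
            restart: unless-stopped  # auto-start on boot (once the Docker daemon starts)
        ```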

  • BCsven@lemmy.ca

    Install Portainer, it helps you get used to managing docker images and containers before going full command line.

    • RBG@discuss.tchncs.de

      I actually prefer Dockge. I only have a few containers, and it’s a lot simpler while still able to do all the basics of docker management. Portainer was overkill for me.