• AnonymousLlama@kbin.social
      1 year ago

      Along with what the other old mates have said: using containers is super handy in web development. You can use Docker Desktop (Windows, Mac, Linux, etc.) to set up containers, one for PHP, another for Postgres (the database), and have them all interconnect.

      The benefit I’ve found is that once someone has set up a Docker file (the thing that says how it all builds and interconnects), you can launch it with a single click.

      The Kbin project itself has a Docker setup to get up and running, handling most of the connection between the database and Symfony (the framework the website is built on).
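
      A minimal Compose file for that kind of PHP-plus-Postgres setup might look like this. This is a sketch only: the service names, image tags, and password are illustrative and not taken from the Kbin project.

      ```yaml
      # docker-compose.yml (illustrative, not Kbin's actual file)
      services:
        app:
          image: php:8.2-fpm        # the PHP container
          depends_on:
            - db                    # start the database first
        db:
          image: postgres:15        # the database container
          environment:
            POSTGRES_PASSWORD: example
          volumes:
            - db-data:/var/lib/postgresql/data   # persist data across restarts
      volumes:
        db-data:
      ```

      With a file like this in place, `docker compose up` brings both containers up and connects them on a shared network, which is the "single click" being described.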

    • Impossible@partizle.com
      1 year ago

      Docker is a method of virtualization at the application level.

      You host a container platform on your computer, and this allows you to run containers.

      Examples of containerised applications include Plex, Sonarr, and Pi-hole.

      Each container is a ‘complete’ platform with all its dependencies included; your container platform supplies disk, CPU, RAM, and network for the container to function.

      Containers can be based on operating systems that are different to your computer.

      Using containers means you have a very easy way of trying an application without installing it on your native operating system.
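
      For example, trying out Pi-hole is a couple of commands, and removing it leaves nothing behind on the host. The image name, ports, and environment variable below follow the public pihole/pihole image; adjust to your setup.

      ```shell
      # Pull and start Pi-hole in the background without installing anything natively.
      docker run -d --name pihole \
        -p 53:53/udp -p 80:80 \
        -e TZ=UTC \
        pihole/pihole

      # Done trying it out? Stop and remove it; the host is untouched.
      docker stop pihole && docker rm pihole
      ```
      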

      Let us know if you have a NAS/PC/server and we can tailor the answer further.

      • vampatori@feddit.uk
        1 year ago

        Containers can be based on operating systems that are different to your computer.

        Containers utilise the host’s kernel, which is why you need to jump through some hoops to run a Linux container on Windows (a VM or WSL).

        That’s one of the key differences between VMs and containers. VMs virtualise all the hardware, so the guest and host operating systems can be totally different; a container, because it’s using the host kernel, must use the same kind of operating system, and it accesses the host’s hardware through the kernel.

        The big advantage of that approach over VMs is that containers are much more lightweight and performant, because they don’t have a virtual kernel/hardware/etc. I find it’s best to think of them as a process wrapper, kind of like chroot for a specific application: you’re just giving the application you’re running a box to run in, but the host OS is still doing the heavy lifting.
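
        A quick way to see the shared-kernel point in practice on a Linux host: a container reports the host’s kernel release, because there is no guest kernel at all.

        ```shell
        # Both commands print the same kernel release on a Linux host,
        # because the container has no kernel of its own.
        uname -r
        docker run --rm alpine uname -r
        ```
        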

        • Zereaux@lemmy.dbzer0.com
          1 year ago

          Interesting. Thanks both for the replies. I do run Plex, Sonarr, Radarr, etc., but I just have them all installed in a straightforward way on a Windows PC. If I’m okay with the software (not just trying it out), am I missing out by not using Docker?

          • vampatori@feddit.uk
            1 year ago

            If I’m okay with the software (not just trying it out), am I missing out by not using Docker?

            No, I think in your use case you’re good. A lot of the key features of containers, such as immutability, reproducibility, scaling, portability, etc., don’t really apply to your use case.

            If you reach a point where you find you want a stand-alone Linux server, or an auto-reconfiguring reverse proxy to map domains to your services, or something like that, then it starts to have some additional benefit and I’d recommend it.

            In fact, using native builds of this software on Windows is probably much more performant.

          • socphoenix@midwest.social
            1 year ago

            Not really, no. Docker can make it easier to set up unusual network configurations, or in some cases make updating things easier, but if what you have is working and fits your needs, there’s not really anything you’re missing out on.

            • i_am_not_a_robot@discuss.tchncs.de
              1 year ago

              In a weird way, for most people Docker is just used to compensate for problems that Windows used to have but doesn’t really anymore. Often on Linux when you install something it gets dumped into a shared prefix like /usr or /usr/local or has dependencies on libraries that are installed into /lib or /usr/lib or /usr/local/lib. If the libraries are versioned correctly, it’s usually not a big problem that the applications are sharing components, but sometimes shared files conflict with each other and you end up with something similar to the old Windows DLL hell, especially if applications are not officially packaged for the distro you’re running. Using a container image avoids this because only the correct libraries and support files are in the image, and they’re in a separate location so they can easily be swapped without impacting other applications that might be using similar files.
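
              As a sketch of that isolation: a Dockerfile pulls exact library versions into the image itself, so the host’s /usr/lib never comes into play. The base image is real; the package choice and the "myapp" binary are hypothetical stand-ins.

              ```dockerfile
              FROM debian:12-slim
              # The image carries its own copy of libpq; the host's libraries are never seen.
              RUN apt-get update && apt-get install -y --no-install-recommends libpq5 \
                  && rm -rf /var/lib/apt/lists/*
              # "myapp" is a hypothetical binary linked against the image's libraries.
              COPY myapp /usr/local/bin/myapp
              CMD ["myapp"]
              ```

              Two applications built against incompatible library versions simply ship two images, and neither can break the other.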

              However, on Windows these days it’s highly discouraged for programs to install things into common directories like that. Usually when you install an application, it installs everything it needs into its own directory. For the things that Microsoft puts into shared directories, there’s a system called SxS (side-by-side assemblies) that’s supposed to prevent conflicts between incompatible versions. It’s not perfect, because there are still cases where you can get interactions, but it’s pretty uncommon now.

              • socphoenix@midwest.social
                1 year ago

                That’s a good point. I run all my server-type apps on FreeBSD, which avoids dependency issues by versioning things that aren’t compatible. For instance, you can install PHP 7.4 or 8.1 as the packages php74/php81, and the different things that require them are compiled to look for the right library. I kinda wish Linux would consider the same thing, but I don’t know if individual distro maintainers would want that kind of extra work.
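
                For the curious, that versioned-package scheme looks roughly like this on FreeBSD. The names follow the real php74/php81 convention, but exact package availability depends on the ports tree at the time.

                ```shell
                # Install one PHP major version; extension packages are versioned to match it.
                pkg install php81
                pkg install php81-pdo_pgsql   # built against php81, not php74
                ```
                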

    • Fenzik@lemmy.ml
      1 year ago

      It’s not NAS-specific, it’s platform-independent: that’s the whole point. You have an application you want to run, and you package it all up into a Docker image which contains not only the application but its dependencies, and their dependencies, all the way down to the OS. That way you don’t need to worry about installing things (the container already has the application installed); all you have to do is allocate some resources to the container and it’s guaranteed* to work.

      *nothing is ever as simple as it first appears

      One area where this is really helpful is horizontally scaling workloads like web servers. If you get a bunch more traffic, you just spin up more containers from your server image on whatever hardware you have lying around and route some of the traffic to the new servers. All the servers are guaranteed to be the same, so it doesn’t matter which one serves the request. This is the thing Kubernetes is very good at.
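
      Concretely, that scale-out step is a one-liner with either Compose or Kubernetes ("web" is a hypothetical stateless service/deployment name):

      ```shell
      # Run five identical replicas of a stateless web service with Compose:
      docker compose up -d --scale web=5

      # The equivalent on Kubernetes:
      kubectl scale deployment web --replicas=5
      ```
      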

      Edit: see caveats below

      • i_am_not_a_robot@discuss.tchncs.de
        1 year ago

        Docker is not platform independent. The OS is not included in the image, and the executables in the image rely on the host system having a compatible kernel and system architecture. Only userspace components like software libraries and, unfortunately, CA certificates are included in the image.

        It primarily supports x86_64 Linux systems. If you want to run on ARM, you need special images or you need a CPU emulator. If you want to run on Mac OS or Windows, you usually need a VM. There are Windows Docker containers, but Docker as a technology isn’t really applicable to Windows because of the dirty separation between userspace and the kernel (if you’ve ever tried to run Docker on Windows Server without Hyper-V support, this is why it’s so difficult to get it to work and it stops working after Windows updates).
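
        You can check which OS and architecture an image was built for before relying on it. These are real docker CLI flags; the exact output depends on your host and the image.

        ```shell
        # Print the platform a pulled image targets, e.g. linux/amd64:
        docker image inspect --format '{{.Os}}/{{.Architecture}}' alpine
        ```
        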

        • AA5B@lemmy.world
          1 year ago

          Let me introduce you to Docker buildx. You don’t know her, she’s from Canada. But seriously, multi-platform images are a thing, and I need to figure out how to do them.

          • i_am_not_a_robot@discuss.tchncs.de
            1 year ago

            You can create multi-platform images (actually manifests of single-platform images) without buildx, and buildx isn’t enough to create multiplatform images. In its default configuration, buildx can usually build images for different processor architectures but requires CPU emulation to do it. If the Dockerfile compiles code, it runs a compiler under emulation instead of cross compiling. To do it without CPU emulation involves configuring builders running on the other platforms and connecting to them over the network.
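
            For reference, the emulated buildx flow described above looks like this. The builder and tag names are illustrative; the binfmt image is the one the Docker docs point to for registering QEMU.

            ```shell
            # Register QEMU handlers so builds can emulate other architectures:
            docker run --privileged --rm tonistiigi/binfmt --install arm64

            # Create and use a dedicated builder, then build for two platforms at once.
            # Any compile steps in the Dockerfile run under emulation for the arm64 half.
            docker buildx create --use --name multi
            docker buildx build --platform linux/amd64,linux/arm64 -t example/app --push .
            ```
            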

            I don’t know if it supports building images for multiple operating systems, but it probably doesn’t matter. I’ve only ever seen container images for Linux and Windows, and it’s virtually impossible to write a single Dockerfile that works for both of those and produces a useful image. The multiplatform images that support Linux and Windows are probably always created using the manifest manipulation commands instead of buildx.

        • Fenzik@lemmy.ml
          1 year ago

          This is true. However, many big maintained public images are multi-arch, so they also run on ARM, and the fact that Docker runs in a VM on Windows and macOS when you install it doesn’t matter to most people. On Linux it indeed reuses the host’s kernel (which is why containers can be a lot lighter than VMs).