So Podman is an open source container engine like Docker—with "full"1 Docker compatibility. IMO Podman’s main benefit over Docker is security. But how is it more secure? Keep reading…

Docker traditionally runs a daemon as the root user, and you need to mount that daemon’s socket into various containers for them to work as intended (See: Traefik, Portainer, etc.) But if someone compromises such a container and therefore gains access to the Docker socket, it’s game over for your host. That Docker socket is the keys to the root kingdom, so to speak.

Podman doesn’t have a daemon by default, although you can run a very minimal one for Docker compatibility. And perhaps more importantly, Podman can run entirely as a non-root user.2 Non-root means if someone compromises a container and somehow manages to break out of it, they don’t get the keys to the kingdom. They only get access to your non-privileged Unix user. So like the keys to a little room that only contains the thing they already compromised.2.5 Pretty neat.

Okay, now for the annoying parts of Podman. In order to achieve this rootless, daemonless nirvana, you have to give up the convenience of Unix users in your containers being the same as the users on the host. (Or at least the same UIDs.) That’s because Podman typically3 runs as a non-root user, and most containers expect to either run as root or some other specific user.

The "solution"4 is user re-mapping. Meaning that you can configure your non-root user that Podman is running as to map into the container as the root user! Or as UID 1234. Or really any mapping you can imagine. If that makes your head spin, wait until you actually try to configure it. It’s actually not so bad on containers that expect to run as root. You just map your non-root user to the container UID 0 (root)… and Bob’s your uncle. But it can get more complicated and annoying when you have to do more involved UID and GID mappings—and then play the resultant permissions whack-a-mole on the host because your volumes are no longer accessed from a container running as host-root…
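To make the mapping slightly less head-spinning, here’s a sketch of the arithmetic involved (the subordinate-UID values below are made-up examples; the real ones live in /etc/subuid and /etc/subgid):

```shell
# Rootless UID mapping sketch -- example values, not your actual config.
# Suppose /etc/subuid contains the line:  alice:100000:65536
# Then container UID 0 maps to alice's own host UID, and container
# UID N (for N >= 1) maps to subuid_start + N - 1 on the host.
subuid_start=100000
container_uid=1234
host_uid=$((subuid_start + container_uid - 1))
echo "container UID $container_uid appears on the host as UID $host_uid"
```

So files written by container UID 1234 land on the host owned by UID 101233, which is exactly the permissions whack-a-mole described above. `podman unshare` (which drops you into the same user namespace) is the usual escape hatch for chown-ing volumes.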

Still, it’s a pretty cool feeling the first time you run a “root” container in your completely unprivileged Unix user and everything just works. (After hours of swearing and Duck-Ducking to get it to that point.) At least, it was pretty cool for me. If it’s not when you do it, then Podman may not be for you.

The other big annoying thing about Podman is that because there’s no Big Bad Daemon managing everything, there are certain things you give up. Like containers actually starting on boot. You’d think that’d be a fundamental feature of a container engine in 2023, but you’d be wrong. Podman doesn’t do that. Podman adheres to the “Unix philosophy.” Meaning, briefly, if Podman doesn’t feel like doing something, then it doesn’t. And therefore expects you to use systemd for starting your containers on boot. Which is all well and good in theory, until you realize that means Podman wants you to manage your containers entirely with systemd. So… running each container with a systemd service, using those services to stop/start/manage your containers, etc.
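For the record, the sort of unit Podman expects you to write looks roughly like this (the service name and image are invented for the sketch; `podman generate systemd --new` will emit something similar for you):

```ini
# ~/.config/systemd/user/container-myapp.service (illustrative sketch)
[Unit]
Description=Podman container-myapp.service
Wants=network-online.target
After=network-online.target

[Service]
Restart=on-failure
ExecStart=/usr/bin/podman run --rm --name myapp docker.io/library/alpine:latest sleep infinity
ExecStop=/usr/bin/podman stop --time 10 myapp

[Install]
WantedBy=default.target
```

Multiply that by every container you run and you can see why some of us reach for Compose instead.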

Which, if you ask me, is totally bananasland. I don’t know about you, but I don’t want to individually manage my containers with systemd. I want to use my good old trusty Docker Compose. The good news is you can use good old trusty Docker Compose with Podman! Just run a compatibility daemon (tiny and minimal and rootless… don’t you worry) to present a Docker-like socket to Compose and boom everything works. Except your containers still don’t actually start on boot. You still need systemd for that. But if you make systemd run Docker Compose, problem solved!
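The wiring for that is pleasantly minimal. A sketch, assuming the systemd socket unit that ships with Podman (the socket path is the rootless default; check yours):

```shell
# Expose Podman's Docker-compatible API on a per-user socket:
#   systemctl --user enable --now podman.socket
# Then point Compose (and anything else that speaks Docker) at it:
XDG_RUNTIME_DIR="${XDG_RUNTIME_DIR:-/run/user/$(id -u)}"
export DOCKER_HOST="unix://$XDG_RUNTIME_DIR/podman/podman.sock"
echo "$DOCKER_HOST"
# From here, a plain `docker compose up -d` talks to Podman instead.
```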

This isn’t the “Podman Way” though, and any real Podman user will be happy to tell you that. The Podman Way is either the aforementioned systemd-running-the-show approach or something called Quadlet or even a Kubernetes compatibility feature. Briefly, about those: Quadlet is “just” a tighter integration between systemd and Podman so that you can declaratively define Podman containers and volumes directly in a sort of systemd service file. (Well, multiple.) It’s like Podman and Docker Compose and systemd and Windows 3.1 INI files all had a bastard love child—and it’s about as pretty as it sounds. IMO, you’d do well to stick with Docker Compose.
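For the curious, a Quadlet file looks something like this (the name and image are made up for the sketch). You drop it in ~/.config/containers/systemd/, run systemctl --user daemon-reload, and systemd generates a service from it:

```ini
# ~/.config/containers/systemd/whoami.container (illustrative)
[Unit]
Description=Example Quadlet container

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

You can judge the INI-flavored family resemblance for yourself.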

The Kubernetes compatibility feature lets you write Kubernetes-style configuration files and run them with Podman to start/manage your containers. It doesn’t actually use a Kubernetes cluster; it lets you pretend you’re running a big boy cluster because your command has the word “kube” in it, but in actuality you’re just running your lowly Podman containers instead. It also has the feel of being a dev toy intended for local development rather than actual production use.5 For instance, there’s no way to apply a change in-place without totally stopping and starting a container with two separate commands. What is this, 2003?
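A sketch of what that looks like in practice (the pod name and image are invented for the example):

```yaml
# whoami.yaml -- start it with `podman kube play whoami.yaml`.
# There's no in-place apply: changing anything means
# `podman kube down whoami.yaml` and then `podman kube play` again.
apiVersion: v1
kind: Pod
metadata:
  name: whoami
spec:
  containers:
    - name: whoami
      image: docker.io/traefik/whoami:latest
      ports:
        - containerPort: 80
          hostPort: 8080
```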

Lastly, there’s Podman Compose. It’s a third-party project (not produced by the Podman devs) that’s intended to support Docker Compose configuration files while working more “natively” with Podman. My brief experience using it (with all due respect to the devs) is that it’s total amateur hour and/or just not ready for prime time. Again, stick with Docker Compose, which works great with Podman.

Anyway, that’s all I’ve got! Use Podman if you want. Don’t use it if you don’t want. I’m not the boss of you. But you said you wanted content on Lemmy, and now you’ve got content on Lemmy. This is all your fault!

1 Where “full” is defined as: Not actually full.

2 Newer versions of Docker also have some rootless capabilities. But they’ve still got that stinky ol’ daemon.

2.5 It’s maybe not quite this simple in practice, because you’ll probably want to run multiple containers under the same Unix account unless you’re really OCD about security and/or have a hatred of the convenience of container networking.

3 You can run Podman as root and have many of the same properties as root Docker, but then what’s the point? One less daemon, I guess?

4 Where “solution” is defined as: Something that solves the problem while creating five new ones.

5 Spoiler: Red Hat’s whole positioning with Podman is that they see it as a way for buttoned-up corporate devs to run containers locally for development while their “production” is running K8s or whatever. Personally, I don’t care how they position it as long as Podman works well to run my self-hosting shit…

  • Geronimo Wenja@agora.nop.chat
    2 years ago

    One of the really nice side-effects of it running rootless is that you get all the benefits of it running as an actual Unix user.

    For instance, you can set up wireguard with IP route to send all traffic from a given UID through the VPN.

    Using that, I set up one user as the single user for running all the stuff I want to have VPN’d for outgoing connections, like *arr services, with absolutely no extra work. I don’t need to configure a specific container, I don’t need to change a docker-compose etc.

    In rootful docker, I had to use a specific IP subnet to achieve the same, which was way more clunky.

      • Geronimo Wenja@agora.nop.chat
        2 years ago

        Yeah sure.

        I’m going to assume you’re starting from the point of having a second Linux user also set up to use rootless podman. That’s just following the same steps for setting up rootless podman as for any other user, so there shouldn’t be too many problems there.

        If you have wireguard set up and running already - i.e. with Mullvad VPN or your own VPN to a VPS - you should be able to run ip link to see a wireguard network interface. Mine is called wg. I don’t use wg-quick, which means I don’t have all my traffic routing through it by default. Instead, I use a systemd unit to bring up the WG interface and set up routing.

        I’ll also assume the UID you want to forward is 1001, because that’s what I’m using. I’ll also use enp3s0 as the default network link, because that’s what mine is, but if yours is eth0, you should use that. Finally, I’ll assume that 192.168.0.0 is your standard network subnet - it’s useful to avoid routing local traffic through wireguard.

        #YOUR_STATIC_EXTERNAL_IP# should be whatever you get by calling curl ifconfig.me if you have a static IP - again, useful to avoid routing local traffic through wireguard. If you don’t have a static IP you can drop this line.

        [Unit]
        Description=Create wireguard interface
        Wants=network-online.target
        After=network-online.target
        
        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/usr/bin/bash -c " \
                /usr/sbin/ip link add dev wg type wireguard || true; \
                /usr/bin/wg setconf wg /etc/wireguard/wg.conf || true; \
                /usr/bin/resolvectl dns wg #PREFERRED_DNS#; \
                /usr/sbin/ip -4 address add #WG_IPV4_ADDRESS#/32 dev wg || true; \
                /usr/sbin/ip -6 address add #WG_IPV6_ADDRESS#/128 dev wg || true; \
                /usr/sbin/ip link set mtu 1420 up dev wg || true; \
                /usr/sbin/ip rule add uidrange 1001-1001 table 200 || true; \
                /usr/sbin/ip route add #VPN_ENDPOINT# via #ROUTER_IP# dev enp3s0 table 200 || true; \
                /usr/sbin/ip route add 192.168.0.0/24 via 192.168.0.1 dev enp3s0 table 200 || true; \
                /usr/sbin/ip route add #YOUR_STATIC_EXTERNAL_IP#/32 via #ROUTER_IP# dev enp3s0 table 200 || true; \
                /usr/sbin/ip route add default via #WG_IPV4_ADDRESS# dev wg table 200 || true; \
        "
        
        ExecStop=/usr/bin/bash -c " \
                /usr/sbin/ip rule del uidrange 1001-1001 table 200 || true; \
                /usr/sbin/ip route flush table 200 || true; \
                /usr/bin/wg set wg peer '#PEER_PUBLIC_KEY#' remove || true; \
                /usr/sbin/ip link del dev wg || true; \
        "
        
        [Install]
        WantedBy=multi-user.target
        

        There’s a bit to go through here, so I’ll take you through why it works. Most of it is just setting up WG to receive/send traffic. The bits that are relevant are:

                /usr/sbin/ip rule add uidrange 1001-1001 table 200 || true; \
                /usr/sbin/ip route add #VPN_ENDPOINT# via #ROUTER_IP# dev enp3s0 table 200 || true; \
                /usr/sbin/ip route add 192.168.0.0/24 via 192.168.0.1 dev enp3s0 table 200 || true; \
                /usr/sbin/ip route add #YOUR_STATIC_EXTERNAL_IP#/32 via #ROUTER_IP# dev enp3s0 table 200 || true; \
                /usr/sbin/ip route add default via #WG_IPV4_ADDRESS# dev wg table 200 || true; \
        

        ip rule add uidrange 1001-1001 table 200 adds a new rule so that traffic from UID 1001 is looked up in table 200. A table is a separate set of routing rules that only applies to traffic a rule directs into it.

        ip route add #VPN_ENDPOINT# ... makes sure the encrypted WireGuard traffic itself still leaves via the normal interface rather than trying to route through the tunnel. This is relevant for handshakes.

        ip route add 192.168.0.0/24 via 192.168.0.1 ... just excludes local traffic, as does ip route add #YOUR_STATIC_EXTERNAL_IP# ...

        Finally, we add ip route add default via #WG_IPV4_ADDRESS# ... which routes all traffic that didn’t match any of the above rules (local traffic, wireguard) to go to the wireguard interface. From there, WG handles all the rest, and passes returning traffic back.

        There’s going to be some individual tweaking here, but the long and short of it is, UID 1001 will have all their external traffic routed through WG. Any internal traffic between docker containers in a docker-compose should already be handled by podman pods and never reach the routing rules. Any traffic aimed at other services in the network - i.e. sonarr calling sabnzbd or transmission - will happen with a relevant local IP of the machine it’s hosted on, and so will also be skipped. Localhost is already handled by existing ip route rules, so you shouldn’t have to worry about that either.

        Hopefully that helps - sorry if it’s a bit confusing. I learned to set up my own IP routing to avoid wg-quick so that I could have greater control over the traffic flow, so this is quite a lot of my learning that I’m attempting to distill into one place.

  • lightree@kbin.social
    2 years ago

    Had a knowledge-sharing meeting at work recently on container security. The guy was using podman like a docker CLI, but it said “this is only an emulation of docker” - are there any downsides to running podman like this? I’m very familiar with docker on the command line.

    • loren@sh.itjust.works
      2 years ago

      I think calling it an emulation downplays podman. Docker and podman are both container runtimes. Docker came first and is synonymous with containers, whereas podman is newer and attempts to fix docker’s problems.

      One outcome of this is podman chose to match docker’s cli very closely so nobody needs to learn a new cli. You can even put podman on the docker socket so “docker [command]” runs with podman.

  • magicsaifa@feddit.de
    2 years ago

    Do you think podman could replace Docker Desktop on my dev machine? It’s become so bloated…

  • markstos@lemmy.world
    2 years ago

    I see it as a feature that Podman containers are run via systemd. This makes their management consistent with the other systemd-managed services. Also, Docker does its own thing with logs, while with systemd, the logs are managed in a consistent way as well.

    Maybe you missed podman generate systemd? Podman will generate the systemd unit files for you.

    For me, the two big benefits of podman are being able to run containers via systemd and improved security by being able to run them rootless.

    • Den Zuko@lemmy.world
      10 months ago

      I actually find this a huge problem. Not all distros are built around LSB, XDG, or FreeDesktop.org, nor should they be, since not everyone is running Linux as a workstation/PC replacement. While podman can for the most part be run on the likes of Gentoo, Alpine, Arch, etc., it becomes a pain in the arse to decouple the tooling for podman away from freedesktop.org standards. Even more a pain in the arse for clustering options (e.g. podman-remote expects freedesktop.org norms, kubernetes expects docker containerd or freedesktop.org with podman, and the nomad stack is just bulky vaporware).

      The really sad part of this is that podman isn’t adding much of anything new over LXC or plain Linux namespaces, outside of not needing a daemon, allowing rootless execution (again, because it doesn’t need a daemon), and giving ACLs around which OCI repos can be pulled from, unlike docker’s wildcard default. It shouldn’t be hard to do Linux containerization without being tied to anything other than the Linux kernel.

  • deepdive@lemmy.world
    2 years ago

    This makes me anxious… How do you cope with all these different technologies? I mean, everything is evolving so fast and everyone wants to have their OWN way of doing things… This is messed up! Right now IT seems like a big maze of technologies and nobody seems to be in sync with each other… especially in DevOps and networking…

    I don’t know about Podman, but it’s baffling how much you need to know and understand in IT… And if every 3 years you have to relearn everything, it’s a never-ending chase of dying and abandoned technologies and a waste of time :/

    Just my 2cent, nothing special !

    • Sebastian Fritz@social.pi.vaduzz.de
      2 years ago

      @deepdive @witten I think the more you dig, the more you find you could learn - probably like every other topic with enough people on it. If you want to keep it simple, you mostly still have the chance to just use a little Linux machine and put everything there the “old” way. For example: I spent some 3-4 months building a kubernetes stack for my homelab, getting everything to run perfectly, then scrapped everything to rewrite it with a bit of ansible and a single machine, because it just works

      • deepdive@lemmy.world
        2 years ago

        I think the more you dig the more you find you could learn

        True, but it’s really frustrating to spend time learning something that’s maybe going to be useless? Just look at networking in Linux distros, between networkd, NetworkManager, netplan, nmtui, nmcli, networkctl, ifupdown… all working in different locations and all having their own way of doing things… This is fucked up :/

        Imagine learning all of docker’s subtleties, and next year it’s deprecated in favor of another technology with its own flavors and commands… :/

        • Sebastian Fritz@social.pi.vaduzz.de
          2 years ago

          @deepdive Yes, this can get frustrating if you let it get to you. I’m 25 years into this, and all I learned is how to look stuff up; I forgot the rest. I don’t learn technologies, I try to reduce them to some basic knowledge so I can handle them well enough. Things change all the time and I’m too lazy to keep track of all that stuff - docker is dead. It’s especially true in my actual playground at work, where we are using kubernetes. Some of the most complex and fast-paced stuff I ever worked with.

        • lambalicious@lemmy.sdf.org
          1 year ago

          And this is why the trick is learning and focusing on the technologies that sit at a “lower level” of the stack, and that have been battle-tested by years or even decades, so it’s understood that they won’t just “go away”. E.g. learning C or Fortran instead of ${niche_language_of_year_20xx}. For the docker bracket, the near equivalent would be, hmmm, I’d say (s)chroot.

          Then again, five years from now docker will be the schroot of its tech bracket.

  • Jeena@jemmy.jeena.net
    2 years ago

    I guess this: “you run a “root” container in your completely unprivileged Unix user and everything just works” sounds like chroot. Also, managing your container starts with systemd sounds pretty good to me, because this is what systemd is designed for: dependencies between services, etc.