Hello, I set up my Proxmox server a few weeks ago, and recently I found that LXC containers could be useful since they really separate all my services into different containers. Since then I've been trying to move my Docker services from a VM into several LXC containers. I ran into some issues: the first one is that a lot of projects run smoother in Docker and don't really have a "normal" way of being packaged… The second is related to the first: since they're not really integrated into the OS, how do I handle updates?
So I wonder how people are deploying their stuff in LXC containers on Proxmox?
Thanks for your help!
EDIT: Tried installing Docker in a Debian LXC, but the performance was absolutely terrible…
Your problem with Docker in LXC might be that you're using ZFS for your host storage. IIRC you need to install fuse-overlayfs on the host and in any LXC that will be running Docker; it works fine for me that way. I'm not sure if that requirement has changed recently; when I did it, the host was on Proxmox 7.x with a Debian 11 LXC.
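In case it helps, the workaround looked roughly like this for me (standard fuse-overlayfs setup; double-check the current Docker storage-driver docs before relying on it):

```bash
# On the Proxmox host AND inside the Debian LXC that will run Docker:
apt-get update && apt-get install -y fuse-overlayfs

# Inside the LXC, tell Docker to use the fuse-overlayfs storage driver
# instead of trying (and failing) to use overlay2 on top of ZFS.
cat > /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "fuse-overlayfs"
}
EOF
systemctl restart docker
docker info | grep -i storage   # should report fuse-overlayfs
```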
The ZFS overlay / Docker snapshot issue has been solved since 2021. Proxmox is also well into 8.3, and 8.0 has been stable since early 2023.
Yeah, it's been a while since I set that up, apparently.
Check out the Helper Scripts. These make getting LXCs up and running super easy. They were built by a community member who recently passed away, and he turned the project over to the community before his passing. It's a great project!
I’ve seen this project and it’s really impressive and useful!
I'm not going to use the scripts directly but write my own, mainly to learn bash and how to deploy services (without Docker…), but I will 100% read and try to understand the scripts so I can mimic them in my own.
Rest in peace Tteck thank you for all your work ❤️
I tried Docker directly on LXCs. Don't do it, man. It's brittle, it barely works, and every Proxmox update will cause things to break. It takes forever to get working because you're disabling things that should not be disabled, and it will only get harder.
I spent years trying to make what you’re talking about work well, and it never did.
Just install a VM and run docker in there. If you really want to make docker containers more generic, then really you may be ready to go full kubernetes.
I got performance issues. Tried with Debian and Alpine, nothing worked. Wouldn't recommend Docker on LXC.
I have the opposite experience of this. All of my local services are a single docker container inside an LXC. I don’t like that it’s conceptually messy, but in practice it’s easy to manage. What I love about it is the simplicity of backing up or moving the entire LXC between servers.
I’ve not had any drama with things breaking across Proxmox updates. The only non-gui thing I need to do during the process is adding two lines to the LXC conf to have Tailscale work correctly.
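For anyone wondering, the two lines are the standard TUN passthrough; on the Proxmox host it's something like this (105 being whatever your container's VMID is):

```bash
# Append to the container's config on the Proxmox host, then restart it.
cat >> /etc/pve/lxc/105.conf <<'EOF'
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
EOF
pct reboot 105
```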
If I remember correctly, Proxmox recommends running Docker in virtual machines instead of LXC containers. I sort of gave up on LXC containers for what I do, which is running stuff in Docker and using my server as a NAS with ZFS storage.
LXC containers are unprivileged by default, so the user IDs don't match the conventional pattern (1000 being the main user, etc.). For a file-sharing setup this was a pain in the butt, because every file ended up owned by some crazy user ID. There are ways around it, which I used for some time, but moving to virtual machines has been super smooth.
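(For context on the "crazy user ID" thing: unprivileged containers shift IDs by an offset, so UID 1000 inside shows up as 101000 on the host. The usual workaround is an idmap in the container config plus matching /etc/subuid and /etc/subgid entries, roughly along the lines of the Proxmox wiki's unprivileged-container example, with 105 as a placeholder VMID:)

```bash
# On the Proxmox host: allow root to map host UID/GID 1000 into containers.
echo 'root:1000:1' >> /etc/subuid
echo 'root:1000:1' >> /etc/subgid

# Map container UID/GID 1000 straight through to host 1000,
# keep everything else on the default 100000+ offset.
cat >> /etc/pve/lxc/105.conf <<'EOF'
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535
EOF
```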
They also don't recommend running Docker on bare metal (Proxmox is Debian, after all). I don't know the exact reasons, but I tend to agree, if only for backups: my VMs get automatically backed up on a schedule, and those backups then automatically get sent to Backblaze B2.
Basically I want to get rid of Docker for the most part and run apps directly in containers, so that if one of my services gets corrupted or something bad happens I can recover it from a backup without affecting the others. So how do you handle backups when running several services in Docker?
Honestly, what you're trying to do is a great use case for Docker already. I suggest learning more about how to use Docker, take backups, restore from backups, etc. E.g., I have an NFSv4 share where I store all of my containerized services' config and data files. Any time I need to restore a previous version, it's as easy as restoring the previous version of the files and starting the previous version of the container.
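A rough sketch of the idea (service name, paths, and the backup layout are made up; the point is that all state lives on the share and gets bind-mounted into the container):

```bash
# Everything the service needs lives on the NFS share, e.g. /mnt/nfs/myapp/,
# and is bind-mounted into the container by the compose file.

cd /mnt/nfs/myapp
docker compose down

# Put the previous copy of the config/data back in place...
rsync -a --delete backups/2024-05-01/ data/

# ...pin the previous image tag in the compose file, then start it again.
docker compose up -d
docker compose logs -f --tail=50
```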
Yeah, that could be an option too, but I kinda like the way LXC works, so I'm going to stick with it and write scripts to make the whole thing automated.
Check out ansible for ways to automate this stuff. Highly recommended!
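Even a tiny playbook gets you pretty far. A sketch (the "lxc" inventory group and reaching the containers over SSH are assumptions on my part, nothing specific to your setup):

```bash
# A minimal playbook that patches every container listed in the "lxc" inventory group.
cat > update-lxc.yml <<'EOF'
- hosts: lxc
  become: true
  tasks:
    - name: Update apt cache and upgrade all packages
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist
        autoremove: true
EOF

ansible-playbook -i inventory.ini update-lxc.yml
```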
I thought I'd just use cron to run weekly updates.
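That works fine for the OS packages; something like this inside each container does it (the schedule is arbitrary, and unattended-upgrades is the more hands-off alternative):

```bash
# Run apt maintenance every Monday at 04:00 via root's crontab.
( crontab -l 2>/dev/null; \
  echo '0 4 * * 1 apt-get update -qq && apt-get -y dist-upgrade && apt-get -y autoremove' ) | crontab -
```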
There are big differences between these two technologies. LXC is closer to a virtual machine than a docker setup. You could mimic most of a dockerfile if you wanted, but it’s not a replacement.
Most of us use a mix of Docker hosts (VMs running Docker) and LXC. The reason is that some stuff is easier to maintain in Docker, since that's its preferred release channel.
You can also move VMs to other hosts in the datacenter if needed, and with shared storage this is quick and means no downtime. LXCs are tied to their host (no live migration).
Backup of Docker would either be the full host, for a simple but inflexible setup, or you back up data and config (the volumes mounted into Docker) and rely on Docker to rebuild the images.
That last approach is more overhead in backup configuration, but you can restore your containers on any host, individually.
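As a sketch of that second approach (volume and file names are placeholders): the data gets tarred out of the named volume, and the image is simply re-pulled wherever you restore.

```bash
# On the old host: dump the named volume's contents to a tarball.
docker run --rm -v myapp_data:/data -v "$PWD":/backup alpine \
  tar czf /backup/myapp_data.tgz -C /data .

# On the new host: recreate the volume, unpack the data, pull and start.
docker volume create myapp_data
docker run --rm -v myapp_data:/data -v "$PWD":/backup alpine \
  tar xzf /backup/myapp_data.tgz -C /data
docker compose up -d
```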
In general, I prefer unprivileged LXC to a full VM unless there’s some specific requirement that countermands that preference (like running an appliance or a non-Linux OS).
What I tend to do is create a new container for each service (unless there's a related stack). If the service runs on Docker, I'll install that right inside the container and manage it with docker compose. By installing Docker directly from get.docker.com instead of the built-in packages, it pretty much works all the time. Since each service is in its own container, restoring backups is pretty service-specific. If you wanted some kind of central control plane for Docker, you could check out swarm mode.
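For reference, that install route inside a fresh Debian LXC is basically just the convenience script, which also pulls in the compose plugin:

```bash
apt-get update && apt-get install -y curl
curl -fsSL https://get.docker.com | sh
docker compose version   # compose v2 comes along as a plugin
```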
I tried installing Docker with the get.docker script but got the same result: really bad performance… So I wonder how to self-host stuff in LXC containers and install services the old-school way.
That's totally doable, but there's no one way to set up every service. Some you'll install from the repository (like nginx, HAProxy, or Samba). Others you'll have to clone from git (like NetBox or DokuWiki). Others have entirely different methods. So, unfortunately, it'll be a lot of reading the documentation.
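The repository case is the easy one; inside the LXC it's just something like this (nginx used as the example from above):

```bash
apt-get update && apt-get install -y nginx
systemctl enable --now nginx
# config lives under /etc/nginx/, and updates arrive with the normal apt upgrade cycle
```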
I prefer VMs as I have found that LXC containers are buggy and slow.
You can, but in my experience containers are lightweight and useful.
Lastly, there is Podman, which some people love for container management. It's not my cup of tea, but it might suit you.
Install on a vm though, not lxc
Hmm, I'm going to check it out, but do you think it would be a good option to deploy all my services to LXC even if their primary release channel is Docker?
That depends on how much work you have to do to keep it working.
Let's take a fairly common web server like Caddy. You can install it either through Docker or natively on Linux.
If the app only exists as a Docker image, then it comes down to your ability to recreate what the Dockerfile does in order to get it installed in your LXC container.
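Sticking with Caddy, the two routes look roughly like this (the Docker flags are the usual ones for the official image; the apt repo lines follow Caddy's own install docs, so double-check them there):

```bash
# Route 1: the Docker release channel (in a VM or Docker-enabled LXC)
docker run -d --name caddy -p 80:80 -p 443:443 \
  -v /srv/caddy/Caddyfile:/etc/caddy/Caddyfile \
  -v caddy_data:/data \
  caddy:latest

# Route 2: native install in a plain Debian LXC via Caddy's apt repo
apt-get install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' \
  | gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' \
  | tee /etc/apt/sources.list.d/caddy-stable.list
apt-get update && apt-get install -y caddy
systemctl enable --now caddy   # config in /etc/caddy/Caddyfile, systemd unit included
```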
Fun fact: early editions of docker used lxc for its containers.
So I would have to write some scripts for installing and maintaining my installs?
(I didn’t know about your “fun fact” :) thx)
Depends on what you want. A Dockerfile defines how the image is built. If you want to mimic that, then you need scripts.
But I think you could benefit from learning how Docker works from the ground up if you want to recreate Docker images in LXC.
A better approach is a dedicated Docker host (a VM) for the Docker-first stuff, and running your non-Docker services on LXC. Treat an LXC as a minimal VM with one (or a few) services/apps per container.
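To make "recreating what the Dockerfile does" concrete, here's a sketch for an imaginary Python web app called exampleapp (the repo URL, port, and paths are all invented for illustration). A Dockerfile would copy the source, install dependencies, and define the start command; natively you do the same steps and hand the start command to systemd:

```bash
#!/usr/bin/env bash
# install-exampleapp.sh -- run inside a fresh Debian LXC (illustrative only)
set -euo pipefail

apt-get update && apt-get install -y git python3-venv

# What "FROM python / COPY . / RUN pip install" becomes natively:
useradd --system -d /opt/exampleapp exampleapp || true
mkdir -p /opt/exampleapp
git clone https://example.com/exampleapp.git /opt/exampleapp/src
python3 -m venv /opt/exampleapp/venv
/opt/exampleapp/venv/bin/pip install -r /opt/exampleapp/src/requirements.txt
chown -R exampleapp: /opt/exampleapp

# What "CMD ..." becomes: a systemd unit instead of a container entrypoint.
cat > /etc/systemd/system/exampleapp.service <<'EOF'
[Unit]
Description=exampleapp (native install)
After=network-online.target

[Service]
User=exampleapp
WorkingDirectory=/opt/exampleapp/src
ExecStart=/opt/exampleapp/venv/bin/python -m exampleapp --port 8080
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now exampleapp
```

Updating then becomes git pull + pip install + systemctl restart, which is exactly the kind of thing your maintenance script would wrap.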
I wanted to use containers to have well-maintained and isolated services, so I think I'm going to use scripts to install and update all my stuff 😁
I just create the LXC, and if the package requires Docker I begrudgingly install Docker in the LXC. I've never had performance issues with Debian LXCs; I use one as my base template and it runs flawlessly (apart from ping not working without sudo).
That being said, I don't like installing Docker a billion times, and I feel like that defeats the purpose of using an LXC in the first place, so for most small Docker containers I just put them in the same LXC, since Docker handles the isolation between them anyway.
I don't use ZFS though, still plain ext4, and I use PBS to back it up to an external drive, but I'm curious whether ZFS may be the root cause of the issues.
I update a container by taking a backup, then logging in and running apt update and apt upgrade. Some applications I update manually by downloading and unpacking the installer.
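From the Proxmox host, that whole routine can be scripted; a sketch with a made-up VMID and storage name:

```bash
# Back up container 105 while it keeps running, then patch it in place.
vzdump 105 --mode snapshot --storage local --compress zstd
pct exec 105 -- bash -c 'apt-get update && apt-get -y dist-upgrade && apt-get -y autoremove'
```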
I haven’t noticed any kind of performance issues. The only application I tried which seemed to require Docker was Immich.
Performance issues when using LibreSpeed. Running it in Docker in a VM got me a perfect gigabit result (as intended on my network), but with Docker in an LXC it bounces between roughly 200 and 600 Mbps and it's not stable at all.
What about using lxc natively? I would imagine librespeed would run better without two layers of virtual networking.
It is. It runs as smoothly as you could imagine; the only downside is the pain of updating and maintaining everything, but I'll go that way.
I’m kind of a Linux noob but I found LXC to be much easier to manage than Docker. Some nice resources are TurnKeyLinux images and the helper scripts:
A lot of how you set up your system is just going to depend on how you want to set it up.
I run podman (like an improved version of docker) in a single LXC container for applications that are primarily packaged as docker apps. I think I have 4 or 5 applications running on that LXC.
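For what it's worth, running a Docker-first image under Podman looks nearly identical to the Docker CLI; a trivial sketch with the stock nginx image (paths and ports are placeholders):

```bash
podman run -d --name web -p 8080:80 \
  -v /srv/web:/usr/share/nginx/html:ro \
  docker.io/library/nginx:stable
podman ps
```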
For things that are distributed via apt, git repo, etc, I’ll either create a new LXC or use an existing LXC if it’s related to other services I’m running. For example, crowdsec is run in the same machine as nginx since those two work together and I’ll always want them both running at the same time, so there’s no reason to separate them.
I have mariadb running in its own LXC so that it can follow a different (more frequent) backup schedule than the mostly static applications that interact with it.
Anything that needs to interact directly with hardware, like Home Assistant, or that I want kernel separation for, gets a full-fledged VM instead of a container.
It’s all about how you want to use it.