I have many services running on my server and about half of them use Postgres. When I installed them manually, I would always create a new database in the same shared Postgres instance for each service, which seems quite logical to me: the least overhead, fast boot, etc.

But since I started using Docker, most docker-compose files come with their own instance of Postgres. Until now I just let them do it and was running a couple of instances of Postgres. But it's getting kind of ridiculous how many Postgres instances I run on one server.

Do you guys run several dockerized instances of Postgres, or do you rewrite the docker-compose files to point the services at your one central Postgres instance? And are there usually any problems with that, like version incompatibilities?
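For what the rewrite typically looks like: you drop the bundled `db` service from the compose file and point the app's environment at the shared instance instead. A minimal sketch (the image name, hostname `postgres.lan`, credentials, and the `DB_*` variable names are placeholders; each app documents its own variable names):

```yaml
# docker-compose.yml for one service, rewritten to use a central
# Postgres instead of a bundled sibling container.
# Assumes a dedicated database/role was created on the central
# instance beforehand, e.g.:
#   CREATE ROLE myapp LOGIN PASSWORD '...';
#   CREATE DATABASE myapp OWNER myapp;
services:
  myapp:
    image: example/myapp:latest    # hypothetical image
    environment:
      DB_HOST: postgres.lan        # central Postgres host, not a container
      DB_PORT: "5432"
      DB_NAME: myapp
      DB_USER: myapp
      DB_PASS: changeme
    # the original "db:" service and its "depends_on: [db]" are removed
```

The main trade-off is the version question above: a shared instance pins every service to one major Postgres version, so apps with strict requirements may still need their own container.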

  • DeltaTangoLima@reddrefuge.com · 9 months ago

    I run Proxmox with a few nodes, and each of my services is (usually) dockerized, each running in a Proxmox Linux container.

    As I like to keep things segregated as much as possible, I really only have one shared Postgres, for the stuff I don't really care about (i.e. if it goes down, I honestly don't care about the services it takes with it, or the time it'll take me to get them back).

    My main Postgres instances are below - there are probably others, but these are the ones I back up religiously, and whose backups I test frequently.

    1. RADIUS database: for wireless auth
    2. paperless-ngx: document management indexing & data
    3. Immich: because Immich has a very specific set of Postgres requirements
    4. Shared: 2 x Sonarr, 3 x Radarr, 1 x Lidarr, a few others