You would expose the port to your host, which makes the DB accessible to anything running on the host, Docker or native. Something like:

```yaml
ports:
  - "5432:5432"
```
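As a fuller sketch (the service name, password, and volume path here are illustrative, not from the thread), a minimal compose file that publishes Postgres to the host might look like:

```yaml
# Hypothetical example: publish Postgres on the host so containers
# and native processes alike can reach it at localhost:5432.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me   # required by the official image
    ports:
      - "5432:5432"                  # host:container
    volumes:
      - ./data:/var/lib/postgresql/data
```

Anything on the host can then connect with `psql -h localhost -p 5432 -U postgres`.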
But I would recommend running a dedicated DB for each service. At least that’s what I do.
- Simpler setup and therefore less error-prone
- More secure, because the DBs don’t need to be exposed
- Easier to manage, because I can independently upgrade, back up, and move them

Isn’t the point of containers that you keep things which depend on each other together, eliminating external dependencies? A single shared DB would be an unnecessary dependency in my view. What if one service requires a new version of MySQL, and another one doesn’t yet support it?
I also run all my databases via a bind mount,

```yaml
volumes:
  - ./data:/var/lib/postgresql/data
```

and each service in its own directory, e.g. /opt/docker/nextcloud.
That way I have everything which makes up a service contained in one folder. Easy to backup/restore, easy to move, and not the least, clean.
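A sketch of that layout (the directory, image tags, and password are illustrative): everything that makes up one service, including its private database, lives under a single folder.

```yaml
# Hypothetical /opt/docker/nextcloud/docker-compose.yml
services:
  app:
    image: nextcloud
    volumes:
      - ./html:/var/www/html            # app data stays in this folder
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me
    volumes:
      - ./data:/var/lib/postgresql/data # DB data stays in this folder too
```

Backing up or moving the service is then just copying /opt/docker/nextcloud (with the containers stopped).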
I would just have Postgres running natively on some solid hardware. It’s easy to configure permissions and connections, too.
Not too hard to set up streaming replication for a hot standby if you wanna be sure (or offload some reads).
I use Postgres btw
You could set up a container with an exposed port for connections to the MySQL database server and run 20 databases inside it, but that comes with its own risks, FYI. You may have MySQL version mismatches to start with, you may run into connection limits on the single instance, and you may hit a number of situations where reads or writes take much longer because other services are contending for the same data.
Not recommended, as one service could screw up all the tables. Each service should have its own DB, and then use persistence to an SSD or NFS.
Virtually all database solutions support limiting users to specific databases/schemas/tables/whatever you need.
Persistence to NFS is also generally bad advice for most databases for performance reasons.
Sure: set up a MySQL server in its own docker-compose project, and create a network that’s shared between all the other docker-compose projects that need access to the DB, by adding it as an external network to whichever service needs to talk to the DB.
Downside is a centralized point of failure, and more complexity when setting up new services, as you have to go create a new user/pass in the DB for each one.
You’ll want to learn some database administration before you attempt this. Simpler to just give them all their own instance.
I have 1 Postgres container with 7 DBs created in it and 1 MySQL container with 1 DB in it (Bookstack…), so it is definitely possible. Just make sure persistence and storage are properly configured, as another user has gone through.
I’m using one MariaDB (lsio) for multiple services, but I wouldn’t do it like that if I started over. I just didn’t see the recommendations in time 😄, but no issues so far.
You can install phpMyAdmin (also in Docker) for DB management with a nice GUI. You can open an existing DB or create a new one quite easily.
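A minimal sketch of adding phpMyAdmin next to an existing MariaDB service (service names and the password are assumptions, not from the thread):

```yaml
services:
  mariadb:
    image: mariadb:11
    environment:
      MARIADB_ROOT_PASSWORD: change-me
  phpmyadmin:
    image: phpmyadmin          # official image
    environment:
      PMA_HOST: mariadb        # hostname of the DB service on the compose network
    ports:
      - "8080:80"              # GUI at http://localhost:8080
```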
Yes, you can.
Without going into specifics, you need to share the network of your DB stack with the stack of the client containers.
You can do that, but there are a couple of things to keep in mind.
Different apps may only be compatible with certain database products and versions. It could be a real pain if you have to spin up a new version of a database and migrate just because one service updated its dependencies, or if you have to keep an old database version around for legacy software.
If you stop using a service, its data is still in the database, which gets bloated after a while. If the database is only for one service, wiping it out when you are done isn’t a big deal. With a shared database, however, you likely have to go in and remove schemas, tables, and users manually, praying you don’t mess something up for another service.
When each service has its own database, moving it to another instance is as easy as copying all the files. If the database is shared, then you need to make sure the database connection is exposed to all the systems that are trying to connect to it. If it’s all local, that’s pretty safe, but if you have services on different cloud providers, you have to be more careful not to expose your database to the world.
Single use databases don’t typically consume a lot of resources unless the service using it is massive. It typically is easier to allow each service to have its own database.
Not only can it be done, but I think it is the way to go. You then have to manage permissions and backups on only one database, and performance improves given that you let PostgreSQL manage its own I/O. It goes without saying that you should use PostgreSQL instead of MySQL.
Yes, and usually sane developers allow you to specify external MySQL instances.
Delving into the politics: many view Docker as a standalone system that shouldn’t need external items, so they leave their DB internal. Which goes against keeping configuration external.
But then you have external databases that you need to know enough to set up ahead of time so the container can use them, and now I’ve given myself a headache.
In many cases it should be fine to point them all at the same server. You’ll just need to make sure there aren’t any collisions between schema/table names.
Yes and no. Theoretically, yes, you may use one database for all services. Historically it was even a common assumption, and some services still ask you for a table prefix to let various services use the same DB without clashes. However, it’s no longer recommended, and you should absolutely avoid going this way.
But nothing stops you from having several separate databases on the same server. Here your MySQL Docker container plays the role of a database server. You can use the same container for different services using different databases. Create one specific user/database per service inside your container (you may wrap the official mysql image in your own image to script their creation), then add it to your docker-compose file and make all your services depend on that same service.
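A sketch of that per-service setup (database names, users, and passwords here are made-up examples). The official mysql image runs any `.sql` files mounted into `/docker-entrypoint-initdb.d` on first initialization, so you can script the per-service users and databases without a custom image:

```yaml
services:
  mysql:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: change-me
    volumes:
      - ./init:/docker-entrypoint-initdb.d   # *.sql here runs on first start
      - ./data:/var/lib/mysql

# ./init/create-dbs.sql might contain, for each service:
#   CREATE DATABASE nextcloud;
#   CREATE USER 'nextcloud'@'%' IDENTIFIED BY 'secret';
#   GRANT ALL PRIVILEGES ON nextcloud.* TO 'nextcloud'@'%';
```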
Sure, I always do it like that.
I do a database docker-compose project with all the databases, so they get named database-mysql, database-mariadb, and so on, and then I share a Docker network (e.g. `docker network create mariadb`) between the containers that need them. Now they can access it directly at mariadb:3306 (in docker-compose you can reach a container by its name if you are on the same network).
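That setup might look roughly like this (the app image and passwords are placeholders; the network is created first with `docker network create mariadb`):

```yaml
# Hypothetical database project: the DB server joins the shared network.
services:
  mariadb:
    image: mariadb:11
    environment:
      MARIADB_ROOT_PASSWORD: change-me
    networks:
      - mariadb

networks:
  mariadb:
    external: true   # created beforehand with: docker network create mariadb
```

```yaml
# Hypothetical client project: joins the same network and talks to mariadb:3306.
services:
  app:
    image: example/app   # placeholder
    environment:
      DB_HOST: mariadb   # service name resolves via Docker's embedded DNS
    networks:
      - mariadb

networks:
  mariadb:
    external: true
```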
I hate when the only option is those “all in one” containers with an included database, which I then need to back up separately.