Thought I would let you all know in case you have missed it. A few days ago Postgres support was finally merged into the Sonarr dev branch (i.e. the 4.x version). I have already transitioned to it, and so far it runs without issue.
You can mostly follow the same instructions as for Radarr from here: https://wiki.servarr.com/radarr/postgres-setup
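If you're starting from scratch on the database side, the rough shape (a sketch from memory, not the wiki verbatim) is: run a Postgres instance, create the databases Sonarr will use, point Sonarr at them once so it creates the schema, then do the pgloader import below. For example with docker (the database names sonarr-main/sonarr-log and the credentials are my assumptions, mirroring the radarr-main/radarr-log convention from the wiki):

# Throwaway Postgres instance; adjust image tag and credentials to taste
docker run -d --name sonarr-postgres -e POSTGRES_USER=user -e POSTGRES_PASSWORD=pwd -p 5432:5432 postgres:14

# Give it a few seconds to initialise, then create the two databases
docker exec sonarr-postgres psql -U user -c 'CREATE DATABASE "sonarr-main";'
docker exec sonarr-postgres psql -U user -c 'CREATE DATABASE "sonarr-log";'

The wiki then has you tell the app about the Postgres connection (config.xml or environment variables) and start it once against the empty databases, so the schema exists before you run the "data only" import.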
I used the following temporary docker container to do the conversion (obviously replace the parts you need to):
docker run --rm -v /path/to/sonarr.db:/sonarr.db --network=host dimitri/pgloader pgloader --debug --verbose --with "quote identifiers" --with "data only" "sqlite://sonarr.db" "postgresql://user:pwd@DB-IP/sonarr-main"
When the run completes, it outputs a summary table showing whether there were any errors. In my case there were 2 tables (can't remember which ones anymore) that couldn't be inserted, so I edited those manually afterwards to match the contents of the original DB.
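If you want to sanity-check the import before trusting it, a quick sketch (assuming sqlite3 and psql are available on the host, and the same connection details as the command above) is to compare row counts table by table:

# List the tables in the old SQLite DB
sqlite3 /path/to/sonarr.db ".tables"

# Compare the row count of a given table on both sides, e.g. History
sqlite3 /path/to/sonarr.db 'SELECT COUNT(*) FROM "History";'
psql "postgresql://user:pwd@DB-IP/sonarr-main" -c 'SELECT COUNT(*) FROM "History";'

Any table where the counts differ is a candidate for the kind of manual fix-up I mentioned.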
This is very exciting. I've felt for a long time that SQLite has held back the performance of the *arrs, so I'm glad to see this.
We’ll be able to tell with real comparative data now that both options are available, but I’d be shocked if SQLite itself is the bottleneck, especially in a low-concurrency scenario (which is the case for most installations of *arr apps).
What are the advantages of using Postgres? Does it make Radarr/Sonarr faster?
I was curious too, so I looked into their GitHub issues. Apparently, SQLite doesn’t play well with k8s due to the distributed/networked nature of the environment. According to comments in the pull request, that seems to be the main driver. And apparently Radarr already has a Postgres option.
Though, there are requests going back to 2017 to support it…just because, I guess? That person seems to just want all their data in one DB for some reason.
Basically this. I have my home stuff running in a K3S cluster, and I had to restore my Sonarr volume several times because the SQLite DB got corrupted. Transitioning to Postgres should solve this issue, and I already have quite a few other things in it, for example Radarr and Prowlarr.
Longhorn storage, ceph block, or other distributed BLOCK storage can help with this issue.
Yeah, I’m using Longhorn. Might be that I have set it up wrong, but it didn’t seem to help with the DB corruption issue.
If you are using Longhorn in RWO mode, you shouldn’t have any issues, as it passes block storage directly to the pod via iSCSI. No file/NFS storage involved.
Here is what I’m using atm. Is there a better way to do this? I’m still learning K8S :)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sonarr-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 250Mi
---
[....]
  volumes:
    - name: config
      persistentVolumeClaim:
        claimName: sonarr-pvc
Looks correct to me.
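If you want to reassure yourself, a quick sketch (assuming the PVC name from your manifest and Longhorn's default longhorn-system namespace) is to confirm the claim actually got bound as ReadWriteOnce by the longhorn StorageClass:

# Show the bound access modes and storage class of the claim
kubectl get pvc sonarr-pvc -o jsonpath='{.status.accessModes}{"\n"}{.spec.storageClassName}{"\n"}'

# Longhorn also exposes its volumes as custom resources you can inspect
kubectl -n longhorn-system get volumes.longhorn.io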
SQLite doesn’t like NFS; the file locking isn’t stable/fast enough, so any latency in the storage can cause data loss, corruption, or just slow things down.
However, SQLite to MySQL is relatively peanuts, Postgres less so…
Still, it’s a nice move for those that don’t run containers on a single host with local filesystems.
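On the NFS point: if you’re not sure what filesystem actually backs your Sonarr config directory, something like this (the path is just an example) will tell you whether it’s NFS or local/block storage:

# Print the filesystem type backing the config directory
df -T /path/to/sonarr/config
stat -f -c %T /path/to/sonarr/config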
I have this same question. I’m running Emby, Sonarr, Radarr and Sabnzbd on a 4-year-old laptop with 16GB of RAM and it seems just fine performance-wise. What could I be missing out on?