IT professional with a strong love for all things #FLOSS. Soon-to-be-retired #soccer player, #guitar player and sizeable #LEGO bricks addict.

GPG 0x736EDD9A0151287B

https://keyoxide.org/26E947141F348287FF494EAE736EDD9A0151287B

Pixelfed: @pete@pixel.cyano.at
PeerTube: @pete@tube.cyano.at

  • 1 Post
  • 13 Comments
Joined 1 year ago
Cake day: July 8th, 2023



  • Ironically, if I had had more services running in Docker I might not have experienced such a fundamental outage. Since Docker services usually spin up their own dedicated database engine, you kind of “roll the dice” on data corruption with each Docker service individually. Thing is, I don’t really believe in bleeding CPU cycles by running redundant database services. And since many of my services are very long-serving, they’ve been set up from source and all funneled towards a single, central and busy database server - so if that one experiences a sudden outage (for instance a power failure), all kinds of corruption and despair can arise. ;-)

    Guess I should really look into a small UPS and automated shutdown. On top of better backup management of course! Always the backups.
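
    On the backup front, a minimal sketch of what a nightly dump of that central database server could look like - assuming it’s PostgreSQL (an assumption; adjust for MariaDB or whatever you actually run), with made-up paths:

        #!/bin/sh
        # Hypothetical nightly dump of the central database server (assumes PostgreSQL)
        # Run from cron, e.g.: 0 3 * * * /usr/local/bin/db-backup.sh
        BACKUP_DIR=/srv/backups/postgres        # placeholder path
        STAMP=$(date +%Y-%m-%d)
        mkdir -p "$BACKUP_DIR"
        # Dump all databases into one compressed file
        pg_dumpall -U postgres | gzip > "$BACKUP_DIR/all-databases-$STAMP.sql.gz"
        # Keep only the last 14 days of dumps
        find "$BACKUP_DIR" -name 'all-databases-*.sql.gz' -mtime +14 -delete

    A dump like that at least gives you a consistent snapshot of each database to fall back on if the live data files get mangled by a sudden power loss.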




  • I don’t see a clear indication that you have too little RAM… RAM should be “used” fully at all times, and your “cached” RAM value suggests you still have quite a bit of RAM that applications could claim when they need it.
    I can’t clearly see swap usage in the graphs - that would be an interesting value for judging the overall stability of the system with regard to fluctuating RAM usage.

    However, once you notice the problem again, right after you manage to log in, run “dmesg -T | grep -i oom” and see whether any processes were killed due to temporarily spiking RAM consumption. If you’re lucky, that command might yield some insight even now.

    Also, run “top” for a while - what does the “wa” value in the “%Cpu(s)” line look like? “wa” stands for I/O wait, and if that value sits noticeably above 5 it might indicate that your CPU is being bottlenecked by, for instance, hard disk speed.
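
    For reference, the checks described above boil down to a handful of standard commands (nothing here is specific to your setup):

        # Any OOM kills since boot? (-T prints human-readable timestamps)
        dmesg -T | grep -i oom

        # Current memory and swap picture
        free -h

        # CPU breakdown including I/O wait ("wa"), sampled every 2 seconds
        vmstat 2 5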





  • I would not upgrade the contract: even if you go beyond your 50 Mbit UPLOAD speed, you still can’t be sure there will be no buffering and hence no drops in streaming. Note you have a “500Mb Broadband” contract but the upload is limited to 50Mb. Asymmetric bandwidth is typical for “consumer” internet, where you mostly consume/download - contrary to “hosting” uplinks, which are typically symmetric and very pricey since you are typically hosting/uploading.

    You need specialised software to make sure you can transmit big, high-bitrate real-time data (which video basically is) over the internet. It’s basically what YouTube does for its users:
    it hosts the arbitrary video data you upload to it (this is your NAS - which you have now) and then delivers that data to users on the web in a compressed, streaming-friendly fashion (this is what streaming software would handle - which you do not have yet).

    In your scenario, without that streaming/transcoding layer, issues will naturally arise - see the sketch below for the kind of processing involved.
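
    A rough sketch of what such streaming software does under the hood, using ffmpeg with placeholder file names and bitrates (media servers like Jellyfin or Plex automate this kind of step on the fly):

        # Re-encode a large source file into a ~4 Mbit/s HLS stream that fits
        # comfortably within a 50 Mbit upload link (file names are placeholders)
        mkdir -p stream
        ffmpeg -i movie-source.mkv \
          -c:v libx264 -b:v 4000k -maxrate 4500k -bufsize 8000k \
          -c:a aac -b:a 160k \
          -f hls -hls_time 6 -hls_playlist_type vod \
          stream/movie.m3u8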


  • M500 broadband package boasts average download speeds of 516Mbps and average upload speeds of 52Mbps

    So, while viewing media from outside your local network, i.e. via Synology QuickConnect, you’re limited to that 52 Mbit upload speed.
    If you’re self-hosting, upload speed matters a lot, unfortunately. You will surely need something that buffers / transcodes your media for viewing over the internet.
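
    As a back-of-the-envelope check with made-up but typical bitrates:

        # How many streams fit into a 52 Mbit/s uplink? (illustrative numbers only)
        echo $(( 52 / 25 ))   # -> 2 high-bitrate direct plays at ~25 Mbit/s
        echo $(( 52 / 4 ))    # -> 13 transcoded 1080p streams at ~4 Mbit/s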