cross-posted from: https://lemmy.world/post/2357075
It seems that self-hosting a federated service like Lemmy just for yourself would only increase traffic on the network, rather than actually balance load between servers.
As far as I understand it, federation is supposed to work by having each server cache content locally and then serve it to the people registered on that server. That way, the servers only have to transmit a minimal amount of data between themselves, which lowers the overhead for small servers – a small server doesn’t get overwhelmed by a ton of people requesting content from it. Now, if, instead, you have everyone self-hosting their own server, you go right back to having everyone sending a ton of requests to small servers, thereby overwhelming them. It seems it’s really only beneficial to the network if you have, say, hundreds of medium-sized servers instead of, say, thousands of very small ones. There is the resilience factor, but the overhead on the network would be rather overwhelming.
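To put rough numbers on that worry, here’s a back-of-the-envelope sketch – the reader count and the one-delivery-per-subscribed-instance model are just my own assumptions, not measurements:

```python
# Back-of-the-envelope: federation deliveries per new post.
# Assumption: the community's home instance sends each activity once
# to every remote instance that has at least one subscriber.

READERS = 30_000  # hypothetical number of people following one community

for users_per_instance in (100, 10, 1):
    instances = READERS // users_per_instance
    print(f"{users_per_instance:>3} users/instance -> "
          f"{instances:>6} instances -> {instances:>6} deliveries per post")
```

Same number of readers either way, but the community’s home instance does a hundred times the delivery work when everyone runs their own single-user server.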
Perhaps one possibility for fixing this is to use some form of load balancer like IPFS to distribute the requests more evenly, but I’m nowhere even remotely close to being knowledgeable enough about that to say anything definitive.
If it worked like torrenting where you have seeds, etc, it’d scale almost infinitely. I don’t think we should change to fit the algorithm. We should change the algorithm to make it scale.
If it worked like torrenting
That would require at least one instance having 100% of the fediverse, would it not?
Every instance would still only need to store the communities its users follow, just like now. But once content is cached, any other instance could grab those messages from any of the instances that have them. It’d be a peer-to-peer sort of organization.
I can think of lots of caveats around content freshness, trust, and making sure the tree of instances organizes itself automatically to minimize depth. For trust, maybe all content could carry signatures made with keys that every instance pulls from the original instance just once every now and then (rough sketch below).
Upvotes and responses would just travel back up the tree, reversing the path the content came down.
But I think it’s similar to other things that already exist. These problems seem solvable.
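Here’s a minimal sketch of what I mean by the signature part, assuming one ed25519 keypair per instance and Python’s cryptography package – nothing like this exists in Lemmy today, it’s just the shape of the idea:

```python
# Minimal sketch of "sign at the origin, verify anywhere"
# (hypothetical scheme, not how Lemmy federation works today).
# Requires the 'cryptography' package.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# The original instance signs each piece of content once...
origin_key = Ed25519PrivateKey.generate()
content = b'{"type": "Note", "content": "hello from the origin instance"}'
signature = origin_key.sign(content)

# ...and publishes its public key, which other instances fetch and cache
# only occasionally, instead of fetching every piece of content from the origin.
origin_public_key: Ed25519PublicKey = origin_key.public_key()


def verify_cached_copy(public_key: Ed25519PublicKey, data: bytes, sig: bytes) -> bool:
    """Accept content fetched from any peer if the origin's signature checks out."""
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False


print(verify_cached_copy(origin_public_key, content, signature))               # True
print(verify_cached_copy(origin_public_key, content + b"tampered", signature))  # False
```

Verification works on a copy fetched from any peer in the tree, so trust wouldn’t depend on where the bytes came from, only on the origin instance’s key.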
The biggest issue I see is edits percolating through the network slowly, or not at all, but I’m sure that’s not an insurmountable problem.
Why is that?
I’m not super familiar with torrenting protocols, but I would have naively assumed that the fact that subs have a single source of truth (e.g. selfhosted@lemmy.world is hosted on lemmy.world in its entirety and only cached on other Lemmy instances) would be enough?
I guess we’d need to federate the sub list, since we wouldn’t want a central source of truth for that, but that bit isn’t any different from what we have currently, AFAIK.
When you initially start a torrent, you define what “100%” is - all of the files. When you update a torrent, you need all of the updates. The beauty of a federated network is that the network can persist without all of it being available.
I run my own instance. If every other server on the planet crapped out overnight, my instance would still be operable (with whatever content from the federation that I’ve consumed).
The Fediverse is currently decentralized, not distributed, and it should most definitely stay that way, for the sake of my disk space.
Torrents are both decentralised and distributed.
When you start a torrent, you don’t define a “100%” for the whole network; you define only your own torrent and nothing else.
To follow your example: if you run your own torrent instance and the rest of the network goes down, then out of all the torrents out there, you will have whatever your instance managed to download. It works exactly the same way in this regard.
The main issue with decentralised P2P systems is that they’re very slow when user count is low.
I think it depends a lot on the federated service.
For Mastodon, you follow individual users, so whether there are a million users or ten million or a hundred million, each instance only contacts the other instances it’s federating with, so it’s quite scalable.
For Lemmy, you follow communities, so every server pulls all the posts and comments of each community it subscribes to. This means that for an instance like lemmy.world, which hosts lots of different big communities, every new server hammers that one central instance.
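To make that concrete, here’s a toy sketch of the fan-out pattern (my own simplification, not actual Lemmy code) – the community’s home instance has to deliver every new activity to the inbox of every subscribed instance, so the cost lands entirely on that one host:

```python
# Toy model of Lemmy-style community fan-out (illustrative only, not real Lemmy code).
import json
import urllib.request


def fan_out(activity: dict, subscribed_inboxes: list[str]) -> None:
    """The community's home instance delivers one copy per subscribed remote instance."""
    body = json.dumps(activity).encode()
    for inbox in subscribed_inboxes:  # work grows linearly with subscribing instances
        request = urllib.request.Request(
            inbox,
            data=body,
            headers={"Content-Type": "application/activity+json"},
        )
        urllib.request.urlopen(request)  # one HTTP POST per remote instance, per activity
```

A thousand subscribing instances means a thousand deliveries for every single post and comment, all paid by the instance hosting the community.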
One strategy for improving the situation, I think, would be to spread the load: instead of everyone piling into megacommunities, people could spread out into smaller, more tight-knit communities across many different instances. Of course, this isn’t really compatible with the point of having big communities in the first place.
It does seem to suggest that ActivityPub isn’t necessarily the most appropriate protocol for this purpose, even though it’s what was used because it’s the de facto standard on the fediverse.
A big issue with Lemmy right now is how picture storage works. All photos are cached as they enter the instance, and there isn’t much you can do to turn that off. It’s ridiculous, especially for server scaling. The database in and of itself is small; it’s really the pictures that are an issue, and they grow rapidly.
That’s why it’s stated in the Lemmy docs to use an image host instead of uploading directly. Unfortunately, most users don’t do that.
You can also configure pict-rs to run on object storage so that all your users’ images are stored on S3 rather than your local disk.
I was looking at that earlier and grabbing an S3 bucket or setting up MinIO does not appeal to me. I think I’m just burned out from IRL work.
🫂
Now, if we had a federated image service that was used by default for image uploads, that would help keep the images off the Lemmy servers themselves :)
And instead of the fediverse protocol, it could work more like i2p, with everyone helping to cache images; even the apps could implement that.
Right now, federation traffic has only a minimal impact on Lemmy. It mostly consumes network resources (to send out ActivityPub messages already waiting in the queue), unlike actual user traffic, which consumes a lot more CPU and database access.
When federation traffic finally becomes large enough to cause issues on popular instances, I think it should be easy enough for the devs to address (e.g. by offloading ActivityPub subscriptions to relay servers). Actual user traffic is much harder to scale.
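If it came to that, a relay could look something like this toy sketch (entirely hypothetical; not an existing Lemmy feature): the busy instance pushes each activity once to the relay, and the relay takes over delivery to everyone subscribed through it.

```python
# Toy relay sketch (hypothetical; not an existing Lemmy/ActivityPub feature).
# The popular instance pushes each activity once; the relay absorbs the fan-out.

class Relay:
    def __init__(self) -> None:
        self.downstream_inboxes: set[str] = set()

    def subscribe(self, inbox_url: str) -> None:
        """A small instance subscribes through the relay instead of the origin."""
        self.downstream_inboxes.add(inbox_url)

    def receive(self, activity: dict) -> int:
        """The origin sends one copy here; the relay re-delivers to every subscriber."""
        for inbox in self.downstream_inboxes:
            deliver(inbox, activity)  # would be an HTTP POST in a real system
        return len(self.downstream_inboxes)


def deliver(inbox_url: str, activity: dict) -> None:
    print(f"would POST {activity['id']} to {inbox_url}")


relay = Relay()
for i in range(3):
    relay.subscribe(f"https://small-instance-{i}.example/inbox")

# The origin does O(1) work per activity instead of O(number of subscribed instances).
relay.receive({"id": "https://big-instance.example/activity/123", "type": "Create"})
```

The origin’s cost per activity then stays constant no matter how many instances subscribe through the relay.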
Federation traffic killed most servers just about a month ago. The problem is not one particular type of traffic; the problem is that the Lemmy software is very bad.
Single-user instances are not detrimental, but they don’t really help either. But if you invite some friends onto your instance it helps a bit, so why not? Hosting your own instance has other advantages anyway.
I can only speak for Matrix Synapse, but enabling federation and joining matrix.org rooms destroyed my server with an i5 6600K. It is very expensive to federate, and the resource cost per user is very high if you’re not sharing your instance with anyone.
Have you tried Conduit? I’m joined to rooms with several thousand users each, and I’m not really suffering any slowdowns except during the initial sync. Conduit seems to be running happily at half a gig of RAM, and CPU usage is minimal.
I haven’t bothered because nobody I wanna talk to is on Matrix, anyway :P
I was running a federated Synapse on a much lower-specced machine than that … and it was fine. I don’t think it’s federation that does it, it’s joining large and active groups.
it’s joining large and active groups.
Yeah, that’s what I said.
Now, if, instead, you have everyone self-hosting their own server, you go right back to having everyone sending a ton of requests to small servers, thereby overwhelming them
It seems like this is based on the assumption that each instance hosts the entirety of Lemmy, but that’s not the case; instances only host the communities their users are interested in.
Is this a hypothetical issue or is there evidence of a scalability bottleneck?
I’ve been self-hosting Pleroma (a Mastodon-compatible server) for a while, as a technical challenge and because I thought it would help. But the truth is, finding content was a nightmare, and on top of that it was eating all the resources on my machine and receiving network requests like crazy.
In comparison, for Lemmy I just joined a big instance and the experience has been much, much better.
I think finding content is the key part of OP’s question. If you host an instance that has only your own subscriptions, the content will feel light, but the extra load on other instances will be minimal and at their convenience. If you load your instance with popular communities so that your All feed pops up weird and interesting content, then the extra load on other instances will be much larger than your personal browsing alone would generate.
But the truth is, finding content was a nightmare, and on top of that it was eating all the resources on my machine and receiving network requests like crazy.
I’ve been self-hosting Lemmy for almost a month now. Finding content works exactly like it does on any other instance, and my VPS is Hetzner’s cheapest ARM server, so I pay less than €4 per month including daily backups, with no performance issues at all.
This comment made me hesitant to host my own instance: !lemmy.chiisana.net/comment/1764
I don’t think there will be enough people self-hosting for it to become an issue.
But for me, it’s the only way. Between defederation, lack of account portability, single or few admins, risk of shutdowns due to lack of funding, and many other issues, no other Lemmy server would be acceptable for me to commit to.