Won’t that lead to some horrible hug-of-death scenarios if a post from a small instance gets popular on a huge one?
Yes, but arguably it was never very scalable for federated software to store large media; it gets utterly massive quickly. Third-party image/video hosts that specialize in hosting that kind of content can do a better job. And honestly, that’s the kind of data that’s just better suited to centralization. Many people can afford to spin up a server that mostly stores text and handles basic interactions, but large images or streaming video get expensive fast, especially if the site were ever to get even remotely close to Reddit levels.
If you’re only responsible for caching for your own users, you don’t unduly burden smaller instances.
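Roughly, the shape would be something like this (a minimal sketch; the cache path and storage layout are made up for illustration):

```python
# Sketch of per-instance media caching: each instance fetches a remote
# attachment from the origin at most once, on behalf of its own users,
# and serves the cached copy afterwards. Paths here are hypothetical.
import hashlib
import pathlib
import urllib.request

CACHE_DIR = pathlib.Path("/var/cache/instance-media")  # hypothetical location

def cached_media(remote_url: str) -> bytes:
    """Return the media bytes, hitting the origin instance at most once."""
    key = hashlib.sha256(remote_url.encode()).hexdigest()
    path = CACHE_DIR / key
    if path.exists():
        return path.read_bytes()  # local users are served from the cache
    with urllib.request.urlopen(remote_url) as resp:  # the one origin fetch
        data = resp.read()
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    path.write_bytes(data)
    return data
```

That way the origin pays one fetch per remote instance, not one per remote viewer.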
Maybe a system where the files federate after 3 upvotes from outside the original instance?
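Something like this, maybe (a sketch only; the threshold and vote structure are invented for illustration):

```python
# Sketch of the proposed rule: only mirror a file once upvotes have
# arrived from some number of *distinct* remote instances.
FEDERATION_THRESHOLD = 3  # the "3 upvotes" from the proposal above

def should_federate(votes: list[dict], origin_instance: str) -> bool:
    """votes: e.g. [{"actor": "alice@other.example"}, ...] (hypothetical shape)."""
    remote_instances = {
        v["actor"].rsplit("@", 1)[-1]        # instance domain of the voter
        for v in votes
        if not v["actor"].endswith("@" + origin_instance)  # ignore local votes
    }
    return len(remote_instances) >= FEDERATION_THRESHOLD
```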
That’d still be exploitable: you could just run three instances of your own. Coming up with an anti-abuse system that malicious users can’t game would be tricky.
We need more decentralization: a federated image/GIF host with CSAM protections.
How would one implement CSAM protection? You’d need actual ML to check for it, and I don’t think there are any trained models openly available. And then you’d have to find someone willing to train such a model, somehow. On top of that, running an ML model would be quite expensive in energy and hardware.
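For what it’s worth, the usual industry approach is matching perceptual hashes of uploads against curated lists of known material (PhotoDNA-style), rather than training a classifier from scratch, though access to those lists is restricted. A minimal sketch of what the upload hook might look like, with a purely hypothetical hash function and blocklist:

```python
# Sketch of an upload-time check against a list of known-bad perceptual
# hashes. Both `perceptual_hash` and the blocklist are stand-ins: real
# systems use vetted, access-restricted services and lists, not this.
import hashlib

KNOWN_BAD_HASHES: set[str] = set()  # would be populated from a vetted list

def perceptual_hash(image_bytes: bytes) -> str:
    # Placeholder: a real perceptual hash survives resizing/re-encoding;
    # a cryptographic hash like this one does not.
    return hashlib.sha256(image_bytes).hexdigest()

def reject_upload(image_bytes: bytes) -> bool:
    """Return True if the upload matches known material and must be blocked."""
    return perceptual_hash(image_bytes) in KNOWN_BAD_HASHES
```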