The fediverse is discussing whether we should defederate from Meta’s new Threads app. Here’s why I probably won’t (for now).

(Federation between Plume and my Lemmy instance doesn’t work correctly at the moment; otherwise I would have made this a proper crosspost.)

  • dbilitated@aussie.zone · +13/−4 · 1 year ago

    Defederating means that people who want to connect with someone on the platform are forced to install it. Fuck that. Not defederating gives people an alternative and shows them that using the fediverse means they don’t miss out on anything, regardless of platform.

    If I can access Threads content using my existing fediverse account, without installing their app and giving them access to my heart rate, microphone, and bowel motion stats, then frankly that’s a win for us.

    • Adanisi@lemmy.zip · +3 · 1 year ago

      Maybe a separate fediverse instance, federated with Threads but defederated from the rest of the fediverse (with the rest also defederated from Facebook), would be a better way to go about it, if we really must connect to them. Cut them off from the main fediverse, but still let people interact with them from outside their platform.

    • TheKingBee@lemmy.world · +9 · 1 year ago

      Here’s my problem/concern: have you read their privacy policy? I want no part of that. Would being federated with them mean that they get to siphon up all of my data too? If so, I don’t think defederating goes far enough…

      • dfyx@lemmy.helios42.de (OP) · +6/−7 · 1 year ago

        They can siphon your data no matter what you do. As I’ve said in other comments, everything on the internet has been crawled and scraped for literal decades. This post is already indexed by a bunch of different search engines and most likely by other scrapers that harvest our data for AI training or ad profiles. And you can do nothing about it without hurting your legitimate audience. Nothing at all.

        There’s robots.txt as a mechanism to tell a crawler what it should or shouldn’t index, but that’s just asking nicely (mostly useful to keep search engines away from pages that don’t contain actual content). You could in theory block certain IP ranges or user agents, but those change faster than you can identify them.

        This dilemma is the whole reason Twitter implemented rate limiting: they wanted to protect their content from scrapers. See where that got them.
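        To illustrate the “asking nicely” part, this is roughly what a robots.txt looks like (a minimal sketch; the paths and the specific crawler name are hypothetical examples, not any instance’s actual config):

          User-agent: *        # applies to every crawler
          Disallow: /search    # hypothetical: keep a non-content page out of indexes

          User-agent: GPTBot   # hypothetical: opt one named scraper out entirely
          Disallow: /

        A well-behaved crawler honors these rules; a scraper that wants your data can simply ignore the file.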

        Most important rule of the internet: if you don’t want something archived forever, don’t post it!