• pezmaker @sh.itjust.works · 1 year ago

    For a moment I thought this was “unpopularopinion”. This is an awful idea and you’re entitled to nothing. They’re not a government entity; your use of their platform is at their discretion.

    • Lvxferre@lemmy.ml · 1 year ago (edited)

      you’re entitled to nothing

      I agree on legal grounds and disagree on moral grounds.

      People are entitled to be treated transparently and fairly by other people and by entities run by other people. And the way shadowbanning works on Reddit is neither transparent (it’s hidden by definition) nor fair (it’s far too prone to false positives).
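
      To make the “not transparent” part concrete, here is a minimal sketch of how a shadowban typically behaves (hypothetical names and data, not Reddit’s actual code): the banned author still sees their own posts, everyone else silently doesn’t, and the author is never told.

      ```python
      from dataclasses import dataclass

      @dataclass
      class Post:
          author: str
          body: str

      # Hypothetical data: one genuine user caught as a false positive.
      shadowbanned = {"spam_bot_42", "unlucky_real_user"}

      def visible_posts(posts: list[Post], viewer: str) -> list[Post]:
          # A shadowbanned author's posts stay visible to the author only;
          # nobody is ever notified, which is what makes the mechanism opaque.
          return [p for p in posts
                  if p.author not in shadowbanned or p.author == viewer]

      posts = [Post("unlucky_real_user", "Why does nobody ever reply to me?"),
               Post("normal_user", "Hello!")]

      print(len(visible_posts(posts, "unlucky_real_user")))  # 2 -- looks normal to them
      print(len(visible_posts(posts, "normal_user")))        # 1 -- their post is invisible
      ```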

  • Otter@lemmy.ca · 1 year ago

    Shadow bans work great against bots and spammers, and adding a requirement to say exactly why someone was banned will just lead to boilerplate like “you’re banned for violating the rules”. If you add too much overhead and enforce specific reasoning, we’ll just end up with more bots and spam sticking around.

    Reddit’s moderation sucks (especially the automated stuff) and needs reworking; I’ve been hit by false positives myself.

    I just don’t think “no shadowbans” and “give reasoning” will fix things, though.

  • slazer2au@lemmy.world · 1 year ago

    I feel that a platform that bans you should by law have to inform you you’ve been banned AND notify you the exact reason for the ban.

    Good luck with that. Between politicians who don’t know how to use computers and tech companies that don’t know why their own automated systems do things, how would the companies even comply?

  • Lvxferre@lemmy.ml · 1 year ago

    Shadowbanning is a useful, albeit misused, tool. Ideally it should only be used against automated systems, such as spammers or karma-farming bots. And it certainly should not be made illegal (remember that “unethical” doesn’t necessarily mean “should be illegal”).

    The problem you’re noticing lies elsewhere: Reddit doesn’t really care about its users or fairness, so it sees no problem with using automated systems to ban users without manual review. (It’s basically “you’re a user, so we assume you’re shit anyway”.) That is bound to create false positives, and if the system hands out shadowbans, it’ll hit genuine users too, not just bots.
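
    As a rough illustration of that failure mode (the heuristics and threshold below are invented for the sake of the example, not Reddit’s real system), an automated pipeline that shadowbans purely on a spam score, with no human review step, will inevitably sweep up genuine users whose behaviour merely looks bot-like:

    ```python
    # Invented automated-moderation sketch; the scoring rules and threshold
    # are assumptions for illustration, not Reddit's real system.

    def spam_score(account: dict) -> float:
        """Crude heuristic score: higher means 'more bot-like'."""
        score = 0.0
        if account["posts_per_hour"] > 20:
            score += 0.5
        if account["account_age_days"] < 2:
            score += 0.3
        if account["duplicate_posts"] > 5:
            score += 0.4
        return score

    def moderate(account: dict) -> str:
        # No manual review anywhere in this path: anything above the
        # threshold is silently shadowbanned, false positive or not.
        return "shadowban" if spam_score(account) >= 0.6 else "ok"

    karma_bot = {"posts_per_hour": 50, "account_age_days": 1, "duplicate_posts": 30}
    excited_new_user = {"posts_per_hour": 25, "account_age_days": 1, "duplicate_posts": 0}

    print(moderate(karma_bot))         # shadowban -- the intended target
    print(moderate(excited_new_user))  # shadowban -- genuine user, false positive
    ```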