A friend of mine is interested in the “sovereign artist” model, which basically means that you self-publish and self-release your own work on your own website, as opposed to going through a publishing house or art gallery.

It’s powerful because it gives everyone a platform to share “niche” art, but as a consumer, it can be difficult to find and “curate” high-quality, interesting works of art. Is there an existing rating/voting system that is resistant to internet vote tampering?

I’m talking about how, 10 years ago, Amazon reviews were pretty helpful. But now they’ve been swarmed with paid and bot-written reviews. Same with Slickdeals and many others.

I’d want a voting system that incorporates some ideas:

  • it would prevent one person from making multiple fake accounts
  • reviews wouldn’t be suppressed or promoted by paid algorithms
  • the algorithm WOULD help connect people to items they are interested in. But maybe the workings of it would be open source, so it can be audited for bad acting.

Does a project like this exist somewhere? Rather than host a project like this in one place, it could be powerful to federate it and remove the temptation to manipulate algorithms.

    • adr1an@programming.dev · 1 point · 1 year ago

      Again, this is not what you asked, but I prefer looking at reviews from YouTubers that I know (e.g. Linus Tech Tips). Maybe a ranking system among those in the review biz would not be so prone to bots.

  • Blake (he/him) @beehaw.org · 3 points · 1 year ago

    Is there an existing rating/voting system that is resistant to internet vote tampering?

    No, there is not. If there was, it would be well-known and widely used.

    Even the U.S. government abandoned e-voting attempts due to strong opposition from cybersecurity experts. I learned about this on the Reveal podcast.

  • azdle@news.idlestate.org · 1 point · 1 year ago

    As far as I’m aware something like that isn’t really possible.

    • it would prevent one person from making multiple fake accounts

    How do you define ‘a person’, and how do you ensure that they only have one account? Short of government control of accounts, I don’t think you can really guarantee this, and even then, fraud still gets past the current government systems.

    Then, how do you verify that the review is coming from the person that the account is for?

    IMO, we’d all be better off going back to smaller-scale social interactions. Think ‘social media towns’: you interact with a smaller number of people and over time develop trust in some of them. Then you can scale this out to more people than you can directly know with some sort of web-of-trust model. You know you trust Alice, and you know Alice trusts Bob, so you can trust Bob, but not necessarily quite as much as you trust Alice. You end up with a web of trust relationships that decays a bit with each hop away from you.

    It’s a rather thorny problem to solve, especially since for it to work optimally you’d want to know how much Alice trusts Bob, but that amounts to everyone documenting how much they trust each of their friends, which seems socially… well… difficult.
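The “trust decays a bit with each hop” idea can be sketched in a few lines. This is only an illustration, not an existing protocol: the graph shape, the 0.5 per-hop decay factor, and the names are all made up for the example.

```python
from collections import defaultdict

def trust_score(graph, source, target, decay=0.5, max_hops=3):
    """Best trust score from source to target over a web of trust.

    graph maps a person to {friend: direct_trust} with trust in [0, 1].
    Each hop multiplies by the friend's rating and a per-hop decay, so
    trust fades the further someone is from you.
    """
    best = defaultdict(float)   # best known score per person
    best[source] = 1.0          # you trust yourself fully
    frontier = {source: 1.0}
    for _ in range(max_hops):
        next_frontier = {}
        for person, score in frontier.items():
            for friend, rating in graph.get(person, {}).items():
                candidate = score * rating * decay
                if candidate > best[friend]:
                    best[friend] = candidate
                    next_frontier[friend] = candidate
        frontier = next_frontier
    return best[target]

web = {"you": {"Alice": 0.9}, "Alice": {"Bob": 0.8}}
print(trust_score(web, "you", "Alice"))  # ≈ 0.45
print(trust_score(web, "you", "Bob"))    # Alice's 0.45 * 0.8 * 0.5 ≈ 0.18
```

The multiplicative decay is one possible choice; it captures the “Bob, but not quite as much as Alice” intuition, at the cost of everyone having to publish numeric trust ratings.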

    Though the rest is actually easy™:

    • reviews wouldn’t be suppressed or promoted by paid algorithms
    • the algorithm WOULD help connect people to items they are interested in. But maybe the workings of it would be open source, so it can be audited for bad acting.

    You do what the fediverse does: make all the information available to everyone, then run your own ‘algorithm’ that you wrote/audited/trust. The hard part is getting others to give away access to all ‘their’ data.
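To make “run your own algorithm” concrete, here is a toy client-side ranker over openly published posts. The field names and weights are invented for the sketch (this is not a real fediverse API); the point is that the scoring logic lives with the reader, not the platform, so anyone can read or change it.

```python
# Client-side ranking over openly available posts. The data model and
# weights are made-up illustrations; the scoring code is yours to audit.
def rank(posts, interests, recency_weight=0.1):
    def score(post):
        topical = len(interests & set(post["tags"]))      # overlap with your interests
        freshness_penalty = recency_weight * post["hours_old"]
        return topical - freshness_penalty
    return sorted(posts, key=score, reverse=True)

posts = [
    {"title": "Ceramics zine", "tags": ["ceramics", "zines"], "hours_old": 2},
    {"title": "Synth demo", "tags": ["music"], "hours_old": 1},
]
print(rank(posts, interests={"ceramics"})[0]["title"])  # Ceramics zine
```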

  • off_brand_@beehaw.org · 1 point · 1 year ago

    The problem breaks down into a few broad subproblems, as I see it.

    1. Confirming the reviewer or voter is who they say they are (to prevent one entity from making multiple reviews).
    2. Confirming the reviewer or voter is a valid stakeholder. This is domain-specific, but can be such metrics as “citizen of country”, or “verified purchaser”.
    3. Confirming the intent of the reviewer. This means discounting reviews from people who were paid off (buyers offered a gift card for a positive review, which happens plenty on Amazon), or discounting review bombs when a game “goes woke”.

    1 and 2 have solutions. Steam cares about whether you’re a verified purchaser, and the barrier to entry of “1 purchase of a game per vote” is certainly enough to make things harder to bot. Amazon might be able to do the same, but so much of the transaction happens outside their purview that a foolproof system would be hard. Not that it’s in their interest to do so, though.
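A “verified purchaser” gate like Steam’s can be sketched as a toy tally (the data model here is hypothetical, invented for illustration): a vote only counts when the voter appears in the purchase record, and only once each, which is what raises the cost of botting to one purchase per vote.

```python
# Toy "one vote per verified purchase" tally. In a real store, purchases
# would come from order records; here it's just a made-up set of user ids.
def tally(votes, purchases):
    counted = set()
    total = 0
    for voter, value in votes:
        if voter in purchases and voter not in counted:
            counted.add(voter)   # each verified purchaser counts once
            total += value
    return total

purchases = {"u1", "u2"}                            # verified buyers
votes = [("u1", 1), ("u2", 1), ("u2", 1), ("bot9", 1)]
print(tally(votes, purchases))  # 2: the duplicate and the bot are dropped
```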

    For places like Reddit or Lemmy, verifying one human per upvote is going to be impossible. New accounts are cheap and easy as a core function of the product. Bot detection is only going to get harder, too.

    If you used some centralized certificate system (like SSL certs), you could maybe get as granular as one vote per machine, but not without massive privacy invasions. The government kinda does this for voting, but we make a point of keeping the private identifiers it issues private.