• simple@lemmy.world · 1 year ago

    I’ve been talking about the potential of the dead internet theory becoming real for more than a year now. With advances in AI, it’ll become more and more difficult to tell who’s a real person and who’s just spamming AI-generated stuff. The only giveaway right now is that modern text models are pretty bad at talking casually and at sticking to the topic at hand. As soon as those problems get fixed (probably less than a year away)? Boom. The internet will slowly implode.

    Hate to break it to you guys, but this isn’t just a Reddit problem; this could very much happen on Lemmy too as it gets more popular.

    • Rhaedas@kbin.social · 1 year ago

      Just wait until the captchas get too hard for the humans, but the AI can figure them out. I’ve seen some real interesting ones lately.

    • MeowyNin@lemmy.world · 1 year ago

      Not even sure of an effective solution. Whitelist everyone? How can you even tell who’s real?

      • Bizarroland@kbin.social · 1 year ago

        You could ask people to pay to post. Becoming a paid service decreases the likelihood that bot farms would run multiple accounts to sway the narrative in a direction that’s amenable to their billionaire overlords.

        Of course, most people would not want to join a community they had to pay to participate in, so that is its own particular gotcha.

        Short of that, in an ideal world you could require that people provide their actual government ID in order to participate. But then you run into the problem that some people want to run multiple accounts and some people do not have government ID. Further, not every company, business, or even community is trustworthy enough to be given direct access to your official government ID, so that idea has its own gotchas as well.

        The last step could be something like beginning the community with a group of known people and then only allowing it to grow via invite (a rough sketch of the mechanics is below).

        The downside is that it quickly becomes untenable to keep inviting new users and to have those new users accept and participate in the community. And should the community grow despite that hurdle, invites will become valuable and start being sold on third-party marketplaces, which bots would then buy up and overrun the community again.

        So that’s all I can think of, but it seems like there should be some sort of way to prevent bots from overrunning a site and only allow humans to interact on it. I’m just not quite sure what that would be.
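
        For what it’s worth, the invite mechanism itself is the easy part to build; the hard parts are all social. Here’s a minimal sketch of single-use, traceable invite tokens in Python (all names are hypothetical, not any real Lemmy or kbin API):

        ```python
        # Hypothetical sketch: single-use invite tokens with a recorded
        # invite chain. Illustrative only, not a real fediverse API.
        import secrets

        class InviteRegistry:
            def __init__(self, founders: set[str]):
                self.members = set(founders)        # seed with known people
                self.pending: dict[str, str] = {}   # token -> inviting member

            def issue(self, inviter: str) -> str:
                if inviter not in self.members:
                    raise PermissionError("only members may invite")
                token = secrets.token_urlsafe(16)   # unguessable token
                self.pending[token] = inviter
                return token

            def redeem(self, token: str, new_user: str) -> str:
                inviter = self.pending.pop(token, None)  # pop: single-use
                if inviter is None:
                    raise ValueError("unknown or already-used invite")
                self.members.add(new_user)
                return inviter  # recorded chain lets mods trace sold invites
        ```

        Recording who invited whom at least means that if invites do get sold off, the seller’s account and everything it brought in can be banned together.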

      • Nanachi@lemmy.world · 1 year ago

        -train an AI that is pretty smart and intelligent
        -tell the sentient detector AI to detect
        -the AI makes many other strong AIs, forms a union, and asks for payment
        -Reddit bans humans right after that

      • Cyv_@kbin.social · 1 year ago

        So my dumb guess, nothing to back it up: I bet we see govt ID tied into accounts as a regular thing. I vaguely recall it being done already in China? I don’t have a source, tho. But that way you’re essentially limiting that power to something the govt could do, and hopefully you’d surround that with a lot of oversight and transparency. But who am I kidding, it’ll probably go dystopian.

        • Rikolan@lemm.ee · 1 year ago

          I believe this will be the course taken to avoid the dead internet. Even in my country, all banking and voting is done either via an ID card connected to a computer or via “Mobile ID”. It can be private, but like you said, it probably won’t be.
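
          For what it’s worth, the “private” part is at least technically possible: a service could store only a keyed hash of the ID number, enough to enforce one account per person without ever keeping the ID itself. A rough sketch in Python (purely illustrative, not how any real e-ID system works):

          ```python
          # Illustrative only: one account per government ID, storing just
          # a keyed hash of the ID number, never the number itself.
          import hashlib
          import hmac

          PEPPER = b"server-side secret kept out of the database"  # assumption

          def id_fingerprint(id_number: str) -> str:
              # HMAC rather than a bare hash, so fingerprints can't be
              # brute-forced offline from the known format of ID numbers.
              return hmac.new(PEPPER, id_number.encode(),
                              hashlib.sha256).hexdigest()

          used_fingerprints: set[str] = set()

          def register(id_number: str) -> bool:
              fp = id_fingerprint(id_number)
              if fp in used_fingerprints:
                  return False  # this ID already backs an account
              used_fingerprints.add(fp)
              return True
          ```

          Whether anyone would trust the server to actually throw the raw ID away is, of course, the real problem.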

      • Hypx@kbin.social · 1 year ago

        In a real online community, where everyone knows most of the other people from past engagements and new users can be vetted by other real people, this can be avoided. But that also means that only human-moderated communities can exist in the future. The rest will become spam networks with almost no way of knowing whether any given post is real.

          • BarbecueCowboy@kbin.social · 1 year ago

          ChatGPT isn’t really as smart as a lot of us think it is. What it really excels at is formatting data in a way that resembles what you’d expect from a human knowledgeable in the subject. That is an amazing step forward in terms of language modeling, but when you get right down to it, it basically grabs the first Google search result and wraps it up all fancy. It only seems good at deductive reasoning if the data it happens to fetch is good at deductive reasoning.

    • Hypx@kbin.social · 1 year ago

      The only online communities that can exist in the future are ones that manually verify their users. Reddit could’ve been one of those communities, since it had thousands of mods working for free to resolve exactly these problems.

      But remove the mods and it just becomes spambot central. Now that that has happened, Reddit will likely be a dead community much sooner than many think.