Please do not give me shit for using Facebook. It’s how I keep in touch with relatives, most of whom live abroad, and with my brother, who has ASD and prefers to communicate with me that way.

I would rather not use it, but I would prefer keeping in touch with my brother.

That said, I would not let AI keep in touch with my brother for me.

  • paddirn@lemmy.world

    This kind of reminds me of the state of the mobile gaming space with respect to these sorts of “idle” games that are out there. I’m not sure if most are like this, but I started playing one a few weeks ago for shits & giggles. It’s almost a tower defense-ish game: you’ve got waves of enemies coming at you, and you need to erect various defenses, made up of heroes from various roles, to stop them.

    The basic gameplay itself is OK-ish, BUT the developers have inserted so many goddamn currencies and roadblocks and things to slow the game down, I guess to make it a fucking grindfest. You’re basically required to grind and level up your heroes to advance past some levels, BUT they give you the option to run “Auto-battles” where you just let the game play itself on auto-pilot. So, to earn the arbitrary amounts of experience needed to level up my people and get past some gameplay roadblock, I can run through X auto-battles; I just let my phone run this stupid thing for ~10 min so I can advance. Or you can pay money for shortcuts; that’s their business model, I guess.

    Are we eventually going to get to that point with social media? We won’t really be maintaining friendships with people; we’ll just have our AIs maintain relationships with other people’s AIs and let it all run on auto-pilot while we’re off doing whatever. Then you’ll run into a Facebook friend IRL and have no idea who they are, despite your AIs being best friends with each other. I’m just wondering how they’ll eventually transform it into the freemium business model.

    • dumples@kbin.social

      To be honest, the best use case for gen-AI is people using it to turn simple sentences into professional messages, and the recipient using gen-AI to translate them back into simple sentences. I can’t wait for all my interactions to be made for me. It’s going to be terrible.
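
      For what it’s worth, that round trip is already trivial to wire up. A minimal sketch, assuming an OpenAI-style chat API (the model name and the prompts are placeholders, not anything any messaging product actually ships):

      ```python
      # Sketch of the "professionalize, then de-professionalize" loop described above.
      # Assumes the OpenAI Python SDK (openai>=1.0) and an API key in OPENAI_API_KEY.
      from openai import OpenAI

      client = OpenAI()
      MODEL = "gpt-4o-mini"  # placeholder model choice

      def ask(prompt: str) -> str:
          # One user-turn chat completion; returns the model's text reply.
          resp = client.chat.completions.create(
              model=MODEL,
              messages=[{"role": "user", "content": prompt}],
          )
          return resp.choices[0].message.content

      def inflate(blunt: str) -> str:
          # Sender side: pad a one-liner out into a "professional" email.
          return ask(f"Rewrite this as a polite, professional email:\n{blunt}")

      def deflate(email: str) -> str:
          # Recipient side: boil the email back down to a blunt one-liner.
          return ask(f"Summarize this email in one blunt sentence:\n{email}")

      if __name__ == "__main__":
          original = "The report is late again."
          print(deflate(inflate(original)))  # round-trips back to roughly the original
      ```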

      • paddirn@lemmy.world

        I’d love it if it could apply to jobs for me. Just take my resume, figure out the position I’m applying for, go through their dumb application site, and fill in all the duplicate fields with the same data that’s already on my resume. Let the AI handle all the applying for me and just tell me which jobs I have an interview for, so I don’t have to waste all my time on the application part.

        • dumples@kbin.social

          I tried to get LLMs to write a cover letter for me. They could either lie about my credentials to match the job or write a generic letter from my actual credentials. They’re so dumb. This should be useful for us workers, but it won’t get better, since it doesn’t help businesses.

          • paddirn@lemmy.world

            If anything, HR departments will use AI to pre-sort candidates based on some algorithm that looks at arbitrary measures, the presence of certain buzzwords, and whatever else; anything to cut down on the amount of screening work HR needs to do.
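
            That kind of pre-sort doesn’t even need an LLM. A toy sketch of the buzzword screening described above (the keyword list, weights, and cutoff are all invented for illustration, not taken from any real HR product):

            ```python
            # Toy buzzword-based resume screen; everything here is made up for illustration.
            BUZZWORDS = {"synergy": 1, "agile": 2, "kubernetes": 3, "stakeholder": 1}
            THRESHOLD = 4  # arbitrary cutoff

            def score(resume_text: str) -> int:
                # Count weighted hits for the buzzwords HR decided to care about.
                words = resume_text.lower().split()
                return sum(weight for word, weight in BUZZWORDS.items() if word in words)

            def pre_sort(resumes: dict[str, str]) -> list[str]:
                # Return the candidates who clear the arbitrary cutoff, highest score first.
                scored = [(name, score(text)) for name, text in resumes.items()]
                passed = [item for item in scored if item[1] >= THRESHOLD]
                return [name for name, _ in sorted(passed, key=lambda item: item[1], reverse=True)]

            if __name__ == "__main__":
                candidates = {
                    "alice": "agile kubernetes stakeholder experience",
                    "bob": "ten years of relevant experience, described in plain language",
                }
                print(pre_sort(candidates))  # ['alice'] -- bob never even gets seen
            ```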

  • MrsDoyle@lemmy.world

    Far-away family are the only reason I use FB too. My sister and some of my nieces use it to a disturbing degree: “checking in” when they’re in restaurants, posting “memories”, pictures of their kids. My sister has a special pose for her FB selfies - head tilt, fake smile. I hate it all with a burning fire, even when I’m clicking the heart button on a puppy photo.

    AI just seems like another step closer to the abyss, the death of true creativity.

  • Starkstruck@lemmy.world

    Soon social media will be entirely automated, requiring no human input!

    But fr wtf is the point of this 💀

  • SpaceBishop@lemmy.zip

    Don’t let trolls bring you down. Some people just are not happy without making someone else sad.

  • 2deck@lemmy.world

    Next, we’ll automatically share that you’re interested in products. But no one cares, because it’s just stupid language models reading content from other stupid language models. People stop buying things because they’ve detached from social media, posts become parodies of themselves, and irony doesn’t really lead anywhere new.

  • Axle182@lemmy.world

    You can disable your Facebook account and keep using Messenger if you want; that’s what I do.

  • kromem@lemmy.world

    This may ultimately be a good thing for social media, given the propensity of SotA models to bias away from fringe misinformation (see Musk’s Grok, which infuriated him and his users by being ‘woke’, i.e. in line with published research), as well as away from outrage-porn interactions.

    I’ve been dreaming of a social network where you would have AI intermediate all interactions to bias the network towards positivity and less hostility.

    This, while clearly a half-assed effort to shove LLMs anywhere possible for Wall Street, may be a first step in a more positive direction.

    • Flying Squid@lemmy.worldOP

      I’ve been dreaming of a social network where you would have AI intermediate all interactions to bias the network towards positivity and less hostility.

      I don’t know that that’s a realistic idea. I don’t know if AI at our current level could discern positivity from hostility accurately enough. There’s too much emotion in language; I think sorting that out properly would require a deep understanding of emotion itself.

      • kromem@lemmy.world

        It absolutely could.

        With a reference frame constructed from over 500 adults, we tested a variety of mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding 89% of human participants with an EQ of 117.

        We first find that LLM agents generally exhibit trust behaviors, referred to as agent trust, under the framework of Trust Games, which are widely recognized in behavioral economics. Then, we discover that LLM agents can have high behavioral alignment with humans regarding trust behaviors, particularly for GPT-4, indicating the feasibility to simulate human trust behaviors with LLM agents.

        A lot of people here have no idea just how far the field has actually come beyond dicking around with the free ChatGPT and reading pop articles.
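
        As a rough illustration of the intermediation idea upthread, gating every outgoing post through a classifier could look something like the sketch below; the word-list check is a deliberate stand-in for whatever moderation or sentiment model such a network would actually run:

        ```python
        # Toy sketch of AI-intermediated posting: every message is screened before
        # it goes out, and hostile ones are bounced back to the author to rephrase.
        # The word-list "classifier" stands in for a real moderation/sentiment model.
        HOSTILE_MARKERS = {"idiot", "stupid", "shut up"}

        def looks_hostile(message: str) -> bool:
            lowered = message.lower()
            return any(marker in lowered for marker in HOSTILE_MARKERS)

        def intermediate(message: str) -> str | None:
            """Return the message to post, or None to bounce it back to the author."""
            if looks_hostile(message):
                return None  # a real network might instead ask an LLM to soften it
            return message

        if __name__ == "__main__":
            print(intermediate("That take is stupid"))                 # None -> bounced
            print(intermediate("Thanks, that's an interesting point"))  # posted as-is
        ```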