I was using Bing to create a list of countries to visit. Since I have been to the majority of the African nations on that list, I asked it to remove the African countries…

It simply replied that it can’t do that due to how unethical it is to discriminate against people and yada yada yada. I explained my reasoning, it apologized, and came back with the same exact list.

I asked it to check the list, as it didn’t remove the African countries, and the bot simply decided to end the conversation. No matter how many times I tried, it would always experience a hiccup because of some ethical process in the background messing up its answers.

It’s really frustrating, I dunno if you guys feel the same. I really feel the bots have become waaaay too tip-toey.

  • Throwdownyourgrandma@lemmynsfw.com · 6 points · 1 year ago

    That is very interesting. I am curious what happens if you ask it to remove countries in the continent of Africa. Maybe that won’t trigger the same response.

    • Razgriz@lemmy.world (OP) · 5 points · 1 year ago

      It apologized, and this time it would keep posting the list but never fully remove all the African countries. If it removes one, it adds another. And if I insist, it ends the conversation.

      Jfc

      • xantoxis@lemmy.one · 9 points · 1 year ago

        This sounds to me like a confluence of two dysfunctions the LLM has: if you phrase a question as if you are making a racist request it will invoke “ethics”, but even if you don’t phrase it that way, it still doesn’t really understand context or what “Africa” is. This is spicy autocomplete. It is working from somebody else’s list of countries, and it doesn’t understand that what you want has a precise, contextually appropriate definition that you can’t just autocomplete into.

        You can get the second type of error with most prompts if you’re not precise enough with what you’re asking.
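
        For what it’s worth, the “precise, contextually appropriate definition” part is trivial once you step outside the chat box. Here’s a rough Python sketch, with a hypothetical hand-abridged continent lookup and an arbitrary example list standing in for the real data, of what the request actually amounts to:

        ```python
        # Hypothetical, hand-abridged continent lookup; a real script would load
        # this from a proper country dataset instead of hard-coding a few entries.
        CONTINENT = {
            "Japan": "Asia",
            "Kenya": "Africa",
            "Morocco": "Africa",
            "Peru": "South America",
            "Portugal": "Europe",
        }

        bucket_list = ["Japan", "Kenya", "Morocco", "Peru", "Portugal"]

        # "Remove the African countries" is just a membership test.
        trimmed = [c for c in bucket_list if CONTINENT.get(c) != "Africa"]
        print(trimmed)  # ['Japan', 'Peru', 'Portugal']
        ```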

        • intensely_human@lemm.ee · 1 point · 1 year ago

          When you say “understand”, do you realize it’s not just reporting what it found in other lists of African countries? It’s correlating all the times someone requested “a list of X not including Y” with how the Ys were absent from the resulting lists of Xes. And it regurgitates some fuzzy average of all the times it saw the word “in” and where country names appeared relative to continent names in sentences that had “in” between them.

          All of these correlations, these intuitable rules about how a word causes other words to arrange around it, ARE understanding.

          Understanding is being able to generate true statements about a thing. What else are we doing as we listen and talk to ourselves about a topic but building an understanding by listening and absorbing fire-together-wire-together correlations between phonemes?

          People are so quick to dismiss text prediction as a source of “real” intelligence, but there’s a hell of a lot going on in the statistical relationships between words. Language evolved from clicks and grunts that happened to result in dopamine, and it worked better when the sounds correlated to the environment. People forming the same fire-together-wire-together correlations as other people allowed the sounds to transmit knowledge. Those correlations grew up organically in the brain.

          All I’m saying is that I’m not sure what there is to language other than probability boundaries enforcing certain word sequences and those allowable word sequences being altered by other preceding word sequences. Like, given what’s been said so far, only certain words make sense next.

          Spelling is like that, syntax is like that, grammar is like that, and knowledge of the world is like that.

          You understand the spelling (i.e. the set of allowed words) if you know the next letter is most likely a T or an N here: “THA_”

          You understand syntax insofar as you know that you need a noun or adjective or adverb next here: “Jane handed me the ____”.

          You understand grammar insofar as you know the next character should be an s or a d here: “Liberty never die_”

          You understand life in a gravity well if you know the next word here is likely to be “floor”: “I held out my hand and let go of the brick. It fell straight to the ____”

          You understand dreaming if you know the next couple of words are likely to be “woke up” here: “I let go of the brick and it sank to the floor slowly, like a leaf. When I looked again it had become a toad, staring at me. That was the last thing I remember before I _____”

          Like, the system didn’t have to read a report of a dream with a toad staring at someone, or, in simpler terms, see text where “toad” preceded “woke up”, in order to be able to predict that. It can correlate relationships to other correlations of relationships. That’s what all the layers in the neural net are doing.

          All it has to do is somehow encode the correlation between “something that doesn’t make sense happening” and “waking up”. And that “something that doesn’t make sense” is itself a huge evaluation based on correlations of how things work, and the types of things that tend to all be linked by people responding with “wait, that doesn’t make sense”.

          There’s a lot of information about the world in those connections.
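
          To make the “probability boundaries” idea above concrete, here’s a toy sketch in plain Python over a made-up three-sentence corpus. It only counts which word follows each two-word context, so it’s nothing like a real LLM, but the principle is the same: the next word is whatever the correlations in the preceding words allow.

          ```python
          from collections import Counter, defaultdict

          # Made-up toy corpus; a real model trains on trillions of tokens.
          corpus = (
              "i let go of the brick and it fell straight to the floor . "
              "she dropped the cup and it fell straight to the floor . "
              "the leaf drifted slowly to the ground . "
          ).split()

          # Count how often each word follows each two-word context.
          counts = defaultdict(Counter)
          for a, b, nxt in zip(corpus, corpus[1:], corpus[2:]):
              counts[(a, b)][nxt] += 1

          def next_word_probs(a, b):
              """Turn the raw counts for context (a, b) into probabilities."""
              c = counts[(a, b)]
              total = sum(c.values())
              return {w: n / total for w, n in c.items()}

          # Given "... straight to the ___", the counts alone push "floor" to the top.
          print(next_word_probs("to", "the"))  # {'floor': 0.67, 'ground': 0.33} (roughly)
          ```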