Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves

  • soiling@beehaw.org · 1 year ago

    “Hallucination” works because everything an LLM outputs is equally true from its perspective. Attempts to replace the word “hallucination” usually end up implying that LLMs are lying, which is not possible: they currently lack the capacity to lie, because they have neither intent nor a theory of mind.

      • Mirodir@lemmy.fmhy.ml · 1 year ago

        Do we have an AI with a theory of mind, or just an AI that answers the questions in the test correctly?

        Now, whether there is a difference between those two things is more of a philosophical debate. But assuming there is a difference, I would argue it’s the latter. The model has likely seen many similar examples during training (the prompts are in the article you linked; it’s not unlikely that similar texts appear in a web-scraped training set). And even if not, it isn’t hard to extrapolate those answers from the many texts it must have read in which a character is surprised that an item is missing when they didn’t see it being stolen.

          • newde@beehaw.org · 1 year ago

            You can only make an educated guess if you understand the intricacies of the programming. In this case, the model is most likely blurting out words and phrases that statistically best fit the (perhaps somewhat leading) questions. A sketch of what that loop looks like is below.
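            To make the “statistics, not understanding” point concrete, here is a minimal sketch of the generation loop (assuming the Hugging Face transformers library and the public gpt2 checkpoint; the prompt is made up for illustration). The model only ever maps a prompt to a probability distribution over the next token and samples from it; nothing in the loop checks the output against reality.

            ```python
            import torch
            from transformers import AutoModelForCausalLM, AutoTokenizer

            tokenizer = AutoTokenizer.from_pretrained("gpt2")
            model = AutoModelForCausalLM.from_pretrained("gpt2")

            prompt = "The item went missing because"  # hypothetical prompt
            input_ids = tokenizer(prompt, return_tensors="pt").input_ids

            with torch.no_grad():
                for _ in range(10):  # generate 10 tokens, one at a time
                    logits = model(input_ids).logits[0, -1]  # scores for every possible next token
                    probs = torch.softmax(logits, dim=-1)    # scores -> probabilities
                    next_id = torch.multinomial(probs, 1)    # sample one token; no truth check anywhere
                    input_ids = torch.cat([input_ids, next_id.unsqueeze(0)], dim=1)

            print(tokenizer.decode(input_ids[0]))
            ```

            Whether the continuation happens to be true or false, the loop does exactly the same thing either way.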

    • variaatio@sopuli.xyz · 1 year ago

      Well, by that same “not being able to lie” standard, it can’t hallucinate either. Hallucinating would imply there is some correct baseline behavior from which the hallucination is a deviation.

      An LLM is not a mind; one shouldn’t use words like “lie” or “hallucinate” about it. That anthropomorphizes a mechanistic algorithm.

      This is simply an algorithm producing arbitrary answers, with no check of the results against reality. By the same token, the times it happens to produce a correct answer are not it “not hallucinating”: it is hallucinating, or not, exactly as much regardless of the answer’s correctness, since it is just doing its algorithmic thing.