“Hallucination” works because everything an LLM outputs is equally true from its perspective. Trying to replace the word “hallucination” usually leads to the implication that LLMs are lying, which is not possible: they don’t currently have the capacity to lie, because they have neither intent nor a theory of mind.
Progress is fast. We have AI with a theory of mind:
https://www.discovermagazine.com/mind/ai-chatbot-spontaneously-develops-a-theory-of-mind
Do we have an AI with a theory of mind, or just an AI that answers the questions in the test correctly?
Now, whether there is a difference between those two things is more of a philosophical debate. But assuming there is, I would argue it’s the latter. The model has likely seen many similar examples during training (the prompts are in the article you linked, and it’s not unlikely that similar texts appear in a web-scraped training set). Even if not, it’s not that difficult to extrapolate those answers from the many texts it must have read in which a character was surprised to find an item missing that they hadn’t seen being stolen.
Good point. How will we be able to tell the difference?
You can make an educated guess if you understand the intricacies of the programming. In this case, it’s most likely blurting out words and phrases that statistically best fit the (perhaps somewhat leading) questions.
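To make “statistically best fit” concrete, here is a deliberately tiny sketch of the idea: a bigram model trained on a made-up corpus that always emits the most frequent next word it has seen. The corpus and prompt are illustrative assumptions, not anything an actual LLM uses, and real models are vastly more sophisticated, but the key property is the same: there is no truth check anywhere, only frequency.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus, loosely echoing the "hidden marble" test scenario.
corpus = ("the box was empty . sally was surprised the box was empty . "
          "sally did not see the marble taken").split()

# Count which word follows which in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(word, steps=4):
    """Greedily extend a prompt with the statistically most common next word."""
    out = [word]
    for _ in range(steps):
        if word not in follows:
            break
        # No notion of correctness here: just the best-fitting continuation.
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_text("sally"))
```

The output is fluent-looking but answers to nothing except word statistics, which is the point being made above: whether a continuation happens to be true is incidental to how it was produced.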
Well, by the “not being able to lie” standard, it can’t hallucinate either. To hallucinate would imply there was some other, correct baseline behaviour from which hallucinating is a deviation.
An LLM is not a mind; one shouldn’t use words like “lie” or “hallucinate” about it. That anthropomorphises a mechanistic algorithm.
This is simply an algorithm producing arbitrary answers, with no reality checks on the results. By the same token, the times it happens to produce a correct answer are not “not hallucinating”. It is hallucinating, or not, exactly as much regardless of the correctness of the answer, since it’s just doing its algorithmic thing.