kciwsnurb@aussie.zone to Technology@beehaw.org • Hallucination is Inevitable: An Innate Limitation of Large Language Models (arXiv preprint)
1 · 10 months ago

Can you provide the source of a few of these completely different LLMs?
add even a small amount of change into an LLM […] radically alter the output
You mean perturbing the parameters of the LLM? That’s hardly surprising IMO. And I’m not sure it’s convincing enough to show independence, unless you have a source for this?
The temperature, I think. You divide the logits by the temperature before feeding them to the softmax function. A larger (resp. smaller) temperature results in a higher (resp. lower) entropy distribution.
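To make that concrete, here's a minimal sketch of temperature-scaled softmax (assuming NumPy; the function name and the example logits are just illustrative, not from the paper or any particular LLM):

```python
import numpy as np

def softmax_with_temperature(logits, temperature=1.0):
    """Divide logits by the temperature, then apply softmax."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()          # subtract max for numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

logits = np.array([2.0, 1.0, 0.5])
for t in (0.5, 1.0, 2.0):
    print(t, softmax_with_temperature(logits, t))
```

Running it shows the effect: at T=0.5 the distribution is sharply peaked on the largest logit (low entropy), at T=2.0 it is much closer to uniform (high entropy), with T=1.0 reproducing the plain softmax.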