In the context of LLMs, “hallucination” refers to the phenomenon where the model generates text that is factually incorrect, nonsensical, or not grounded in reality. Bias, by contrast, refers to the systematic skew the model has learnt from its training data.
Types of Hallucinations
Factual Inaccuracies: The LLM produces a statement that is factually incorrect.
Unsupported Claims: The LLM generates a response that has no basis in the input or context.
Nonsensical Statements: The LLM produces a response that doesn’t make sense or is unrelated to the context.
Improbable Scenarios: The LLM generates a response that describes an implausible or highly unlikely event.
Measuring Hallucinations
How can hallucinations be detected?
NER-based check
Check whether the generated text contains a named entity (NE) that does not appear in the ground truth; any such new entity is a candidate hallucination. A rough sketch of this check is shown below.
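As an illustration of the NER comparison, here is a minimal sketch using spaCy. The model name "en_core_web_sm" and the matching strategy (lowercased entity text plus label) are assumptions for the example, not a fixed recipe.

```python
# Minimal sketch of a NER-based hallucination check (assumes spaCy and
# the "en_core_web_sm" model are installed).
import spacy

nlp = spacy.load("en_core_web_sm")

def new_entities(generated: str, ground_truth: str) -> set[tuple[str, str]]:
    """Return named entities found in the generated text but not in the
    ground truth; these are candidate hallucinations to review."""
    gen_ents = {(ent.text.lower(), ent.label_) for ent in nlp(generated).ents}
    ref_ents = {(ent.text.lower(), ent.label_) for ent in nlp(ground_truth).ents}
    return gen_ents - ref_ents

# Usage: any entity returned here does not appear in the reference.
reference = "Marie Curie won the Nobel Prize in Physics in 1903."
generation = "Marie Curie won the Nobel Prize in Physics in 1911 with Albert Einstein."
print(new_entities(generation, reference))
# e.g. {('1911', 'DATE'), ('albert einstein', 'PERSON')}
```

Exact string matching on entities is deliberately strict; in practice you may want to normalize dates and names or allow fuzzy matches to reduce false positives.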