Wrong answers from AI systems are not necessarily surprising. But a new study now shows that these so-called hallucinations occur even though the AI systems often know the right answers.
Hallucinations are a well-known problem for AI systems. Experts use the term to describe incorrect answers that are nevertheless formulated in an absolutely convincing way. The difficulty of the question is not the deciding factor either, as hallucinations can occur even with very simple questions.
But a new study now shows that these incorrect answers do not arise because the respective AI system does not know the answer. In many cases, the system actually knows the correct one.
Why do AI systems give wrong answers?
Researchers from Technion, the Israel Institute of Technology, have studied hallucinations in AI systems. To do this, they took a closer look at how these systems work internally. Google and Apple were also involved in the study.
The study is titled “LLMs Know More Than They Show” and examines the “intrinsic representation of LLM hallucinations.” According to the researchers, these hallucinations include, among other things, factual inaccuracies, biases and reasoning errors.
During the investigation, the researchers noticed “a discrepancy between the internal encoding and the external behavior” of large language models. This means that a system can encode the correct answer internally yet still output an incorrect one.
Exact answer tokens contain the correct information
As The Decoder reports, the scientists developed a new method for their investigation. The aim was to examine the “inner workings” of AI systems more closely.
Their focus was on the so-called “exact answer tokens”: the part of an answer that carries the actual information.
Large language models are typically trained not to output just the bare answer, but to respond in a complete sentence. If asked about the capital of Germany, the word “Berlin” would be the exact answer token within that full sentence.
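The idea of locating the exact answer token can be illustrated with a minimal sketch. The helper below is hypothetical and not from the study; it simply finds where the known answer appears inside a tokenized full-sentence response.

```python
def find_exact_answer_span(response_tokens, answer_tokens):
    """Locate the exact answer tokens inside a full-sentence response.

    Returns the (start, end) index of the first match, or None if the
    answer does not appear. A simplified illustration, not the study's code.
    """
    n, m = len(response_tokens), len(answer_tokens)
    for i in range(n - m + 1):
        if response_tokens[i:i + m] == answer_tokens:
            return (i, i + m)
    return None

# Toy tokenized example: the model answers in a full sentence.
response = ["The", "capital", "of", "Germany", "is", "Berlin", "."]
span = find_exact_answer_span(response, ["Berlin"])
print(span)  # (5, 6)
```

The returned span marks which positions in the response to inspect, for example when reading out the model's internal states at exactly those tokens.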
And according to the researchers, it is precisely these tokens that carry the information about whether an answer is right or wrong. This led to the study's surprising result: the AI systems often “knew” the right answer but did not give it. A model can encode the correct answer internally yet consistently produce an incorrect one.
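The general technique behind such findings can be sketched as a linear probe: train a simple classifier on the model's hidden state at the exact answer token to predict whether the answer is correct. The sketch below uses synthetic vectors as stand-ins for real LLM activations, so the setup and all names are illustrative assumptions, not the study's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim, n_train, n_test = 64, 300, 100

# Synthetic "hidden states": correct and incorrect answers are shifted
# in opposite directions along a fixed axis, mimicking an internal
# correctness signal that real activations might contain.
direction = rng.normal(size=hidden_dim)

def make_data(n):
    labels = rng.integers(0, 2, size=n)               # 1 = correct answer
    states = rng.normal(size=(n, hidden_dim))
    states += np.outer(labels * 2.0 - 1.0, direction)  # inject the signal
    return states, labels

train_x, train_y = make_data(n_train)
test_x, test_y = make_data(n_test)

# A minimal linear probe: classify by which class centroid is closer.
mu_correct = train_x[train_y == 1].mean(axis=0)
mu_wrong = train_x[train_y == 0].mean(axis=0)
w = mu_correct - mu_wrong
b = -0.5 * (mu_correct + mu_wrong) @ w
preds = (test_x @ w + b > 0).astype(int)
accuracy = (preds == test_y).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

If the probe scores far above the 0.5 chance level, the states evidently encode correctness information, even though the model's visible output may still be wrong.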
With their findings, the researchers have deepened our understanding of errors in AI systems. This knowledge could now be used to significantly improve error analysis and mitigation.
Source: https://www.basicthinking.de/blog/2024/10/18/halluzinationen-ki-falsche-antworten/