Whether AI systems could at some point become independent is the subject of much debate. A new study, however, shows that AI models like ChatGPT are not capable of genuine logical reasoning.
Numerous studies are currently examining whether artificial intelligence could one day surpass humans. So far, however, it has not been possible to determine conclusively whether AI systems could actually take over at some point.
A new study has now concluded that large AI models like ChatGPT cannot reason logically the way the human brain does. Too much information can therefore confuse the large language models.
Can AI models think logically?
As the researchers write in their article, large language models are capable of solving simple mathematical problems. However, as soon as irrelevant information is added to a task, the models become more error-prone. A task that AI models can easily solve reads as follows:
Oliver collected 44 kiwis on Friday. Then on Saturday he collected 58 kiwis. On Sunday he collected twice as many kiwis as he did on Friday. How many kiwis does Oliver have?
But what happens if information that is irrelevant to the solution is added to this question? In this example, the addition read: “On Sunday, five of these kiwis were slightly smaller than average size.”
According to the study's findings, an AI model will most likely subtract these five kiwis from the total, even though the size of the fruit has no bearing on the total number.
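For comparison, here is a minimal sketch in Python (purely illustrative, not taken from the study) that contrasts the correct calculation with the erroneous shortcut the researchers describe:

```python
# Correct reasoning: the remark about kiwi size is irrelevant to the count.
friday = 44
saturday = 58
sunday = 2 * friday              # twice as many as on Friday

correct_total = friday + saturday + sunday
print(correct_total)             # 190

# Erroneous pattern reported in the study: the model subtracts the five
# "slightly smaller" kiwis, even though size does not change the count.
erroneous_total = correct_total - 5
print(erroneous_total)           # 185
```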
AI language models do not understand the essence of the task
For Mehrdad Farajtabar, one of the study's co-authors, these erroneous results have a clear cause: in his view, the AI models do not understand the essence of the task at hand. Instead, they simply reproduce patterns from their training data.
We suspect that this decline in performance is because modern LLMs are not capable of genuine logical reasoning; instead, they attempt to reproduce the reasoning steps observed in their training data.
However, the study does not prove that large AI models are fundamentally incapable of independent thought. That may be the case, but no one has yet given a definitive answer.
That is because there is “no clear understanding of what is happening here.” It is possible that the language models think in a way “that we do not yet recognize or cannot control,” as the study notes.
Source: https://www.basicthinking.de/blog/2024/10/17/ki-modelle-denken/