Artificial intelligence (AI) apparently cannot develop a general understanding of the real world. A recent study from the USA shows that even large language models break down when the rules of a situation change.

It can write poems and computer programs, evaluate huge amounts of data and even drive a car: artificial intelligence is now showing impressive capabilities and is being used in a wide range of areas.

This can give the impression that generative AI is also capable of learning general truths about the world. However, as a recent study by the Massachusetts Institute of Technology (MIT) shows, this is not the case.

AI has no meaningful understanding of the real world

To investigate this issue, researchers from MIT, Harvard University and Cornell University tasked a popular generative AI model with providing turn-by-turn directions in New York City. The system produced results with near-perfect accuracy, yet without having formed an internal map of the city.

The problem: when the group closed some streets and added detours for the study, the model's performance plummeted. Closer inspection revealed that the AI was generating non-existent streets that curved between the grid, connecting far-apart intersections.
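The idea behind this test can be illustrated with a small, purely hypothetical sketch: a "model" that has merely memorized routes on an intact street grid answers perfectly at first, but its memorized directions stop matching the streets once a few of them are closed. The grid, the memorization step and the closed streets below are invented for illustration and are not the study's actual setup.

```python
# Hypothetical illustration of the study's idea (not the actual experiment):
# a "model" that memorized shortest routes on an intact grid gives perfect
# directions, but those memorized routes become invalid once streets close.
import networkx as nx

# 5x5 Manhattan-style street grid.
city = nx.grid_2d_graph(5, 5)

# "Training": memorize the shortest route for every origin/destination pair.
memorized = dict(nx.all_pairs_shortest_path(city))

def follows_streets(route, graph) -> bool:
    """Check that every step of a route uses a street that actually exists."""
    return all(graph.has_edge(a, b) for a, b in zip(route, route[1:]))

# On the unchanged grid, every memorized route is valid.
intact_ok = all(
    follows_streets(memorized[s][t], city)
    for s in city for t in city
)

# Close a few streets (forcing detours) and re-check the memorized routes.
detour_city = city.copy()
detour_city.remove_edges_from([((0, 0), (0, 1)), ((2, 2), (2, 3)), ((3, 1), (4, 1))])
detour_ok = sum(
    follows_streets(memorized[s][t], detour_city)
    for s in detour_city for t in detour_city
)

print(f"all routes valid on intact grid: {intact_ok}")
print(f"valid routes after closures: {detour_ok} of {len(detour_city) ** 2}")
```

A system that had actually learned the street grid as a map could simply re-plan around the closures; one that only reproduces familiar answer patterns cannot.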

The study focuses on a type of generative AI model known as the transformer, which is considered the backbone of LLMs such as GPT-4. Transformers are trained on massive amounts of language data to predict the next token in a sequence – for example, the next word in a sentence.
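To make the training objective concrete, here is a deliberately simplified sketch of next-token prediction. A real transformer learns this with attention layers over enormous corpora; below, a simple bigram counter on a toy sentence stands in for the model, purely for illustration.

```python
# Minimal sketch of the next-token prediction objective (toy stand-in,
# not a transformer): count which token tends to follow which.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each token follows each other token.
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most likely next token given the previous one."""
    candidates = follow_counts.get(token)
    if not candidates:
        return "<unk>"
    return candidates.most_common(1)[0][0]

print(predict_next("sat"))  # 'on' – the only continuation seen in the corpus
print(predict_next("the"))  # whichever follower of 'the' was counted first among ties
```

The point of the study is that optimizing this objective alone does not guarantee that the model also builds an accurate internal model of the world the text describes.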

Understanding the world is important for future AI systems

The results show that transformers can perform surprisingly well on certain tasks without having understood the underlying rules. However, if future AI systems are to capture accurate world models, researchers will have to take a different approach.

If an AI breaks down as soon as the task or the environment changes, this could have serious consequences for generative AI models deployed in the real world.

“The question of whether LLMs learn coherent models of the world is very important if we want to use these techniques to make new discoveries,” explains Ashesh Rambachan, assistant professor of economics and principal investigator in the MIT Laboratory for Information and Decision Systems (LIDS).

Researchers want to change evaluation standards

The MIT team therefore wants to tackle a broader set of problems, including ones in which some rules are only partially known. They also want to apply their evaluation standards to real scientific problems.

“Often we see these models doing impressive things and think that they must have some understanding of the world. I hope we can convince people that this question needs to be thought about very carefully and that we don't have to rely on our own intuitions to answer it,” Rambachan said.


Source: https://www.basicthinking.de/blog/2024/11/13/ki-verstaendnis-reale-welt/
