Researchers repeatedly warn that artificial intelligence could become autonomous. But large AI language models cannot develop a life of their own, according to a recent study.

Developments in the field of artificial intelligence have gained enormous momentum, especially since the release of ChatGPT. But the boom also regularly attracts critics who warn that AI systems could eventually take on a life of their own.

However, a new study by the Technical University of Darmstadt and the University of Bath shows that this is not possible. According to the study, ChatGPT and similar models are not capable of independent, complex thinking.

AI language models cannot develop a life of their own

For their study, the researchers experimented with 20 AI language models from four model families: GPT, LLaMA, T5, and Falcon 2.

The focus was on the so-called emergent abilities of AI models – that is, unforeseen and sudden leaps in the performance of language models as they grow larger.

However, the researchers concluded that large language models (LLMs) show no tendency to develop general “intelligent” behavior. They are therefore unable to act in a planned or intuitive way, let alone engage in complex reasoning.

Emergent abilities in focus

After language models were first introduced, researchers noticed that the models became more capable as they grew larger. This was due, among other things, to the amount of data they were trained on.

The more data was available for training, the more language-based tasks the models could solve. Researchers therefore hoped that scaling up the training data would keep making the models better.

However, critics also warned of the risks posed by these emerging capabilities. For example, it was feared that AI language models could take on a life of their own and escape human control.

According to the research results, however, there is no evidence for this. It is therefore unlikely that AI language models possess sophisticated reasoning abilities.

Instead, the researchers showed that LLMs merely acquire the superficial ability to follow relatively simple instructions. The systems are still a long way from what humans can do.
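This kind of instruction following can be illustrated with few-shot prompting, where a model simply continues a pattern laid out in its prompt rather than reasoning about the task. The following is a minimal sketch using the Hugging Face transformers library; the model choice and the prompt are illustrative assumptions, not the study's actual experimental setup.

```python
# Minimal sketch of few-shot (in-context) prompting: the model continues
# a pattern given in the prompt by imitation, with no explicit reasoning.
# Model and prompt are illustrative assumptions, not the study's setup.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # any causal LM works

# The prompt supplies the pattern; the model is expected to complete it.
prompt = (
    "Translate English to German.\n"
    "sea -> Meer\n"
    "sky -> Himmel\n"
    "house ->"
)

result = generator(prompt, max_new_tokens=5, do_sample=False)
print(result[0]["generated_text"])
```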

“However, our results do not mean that AI poses no threat at all,” explains study author Iryna Gurevych from TU Darmstadt. “Rather, we show that the alleged emergence of complex thinking skills associated with certain threats is not supported by evidence and that we can indeed control the learning process of LLMs.”

Gurevych therefore recommends that future research projects focus on the risks of AI use. AI language models, for example, have great potential for misuse in generating fake news.

Source: https://www.basicthinking.de/blog/2024/08/14/ki-sprachmodelle-kein-eigenleben/
