Since the release of ChatGPT, the topic of AI has been omnipresent. But the very concept behind artificial intelligence's success could also be its downfall. At least, that is what recent research results suggest, which predict a model collapse.
Interest in artificial intelligence has increased significantly with the release of ChatGPT. This is also reflected in Google searches, which have risen massively since the end of 2022. However, current research suggests that the very foundation of AI's success could also be its downfall.
Model collapse: AI learns from AI – and gets worse and worse
Researchers from the Universities of Cambridge and Oxford have investigated what happens when AI tools are trained on content that comes from another AI. The result: according to the study, published in the scientific journal Nature, artificial intelligence becomes increasingly worse when it relies exclusively on AI-generated content.
According to the study, the quality of the content decreased with each successive round. By the fifth iteration, an AI model that used data from another AI was already producing noticeably poorer answers. After the ninth consecutive iteration, only nonsensical, uniform mush came out.
The researchers call this process “model collapse”: a cyclical overdose of AI-generated content that continues until the output amounts to a worthless distortion of reality. The results are alarming because more and more AI-generated content is circulating on the Internet. According to a study by researchers at Amazon Web Services, more than 50 percent of all translations on the Internet come from AI models.
If this trend continues and AI training is not fundamentally overhauled, researchers say it is possible that AI will not only worsen itself, but also the entire Internet.
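The degradation loop described above can be illustrated with a toy simulation. This is not the study's actual methodology, just a minimal sketch of the underlying idea: each "generation" of a model is fitted only to samples drawn from the previous generation's fitted distribution, so estimation noise compounds and, in expectation, the distribution's tails and diversity erode. All function names here are hypothetical.

```python
import random
import statistics

def refit_generations(data, generations, sample_size, seed=0):
    """Repeatedly fit a normal distribution to samples drawn from the
    previous generation's own fit, mimicking AI trained on AI output.
    Returns the fitted standard deviation after each generation."""
    rng = random.Random(seed)
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    history = [sigma]
    for _ in range(generations):
        # Each generation sees only samples produced by the previous model.
        samples = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.fmean(samples)
        sigma = statistics.stdev(samples)
        history.append(sigma)
    return history

# "Human" data: one fixed draw from a standard normal distribution.
rng0 = random.Random(42)
human_data = [rng0.gauss(0, 1) for _ in range(500)]

# Nine generations, echoing the article's ninth-iteration breakdown.
stdevs = refit_generations(human_data, generations=9, sample_size=50)
print(stdevs[0], stdevs[-1])
```

With a real model the mechanism is far more complex, but the sketch captures the core feedback: once the training signal is the model's own output, sampling error accumulates instead of averaging out.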
Does artificial intelligence also make the Internet worse?
Another study, also published in the scientific journal Nature, came to similar results. According to it, an AI that was trained on dog breeds excluded more and more lesser-known breeds over time. The model, the researchers say, developed its own “use it or lose it” behavior.
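The “use it or lose it” effect can be sketched as a simple resampling process. This is a hypothetical illustration, not the study's setup: each model generation draws its training set from the previous generation's output, so a rare category can drop out by chance in one generation and, once gone, can never reappear.

```python
import random
from collections import Counter

def resample_generations(population, generations, seed=1):
    """Simulate successive model generations, each trained only on
    samples drawn (with replacement) from the previous generation.
    Rare categories that vanish in one step are lost for good."""
    rng = random.Random(seed)
    current = list(population)
    for _ in range(generations):
        current = rng.choices(current, k=len(current))
    return Counter(current)

# Common breeds dominate; the "rare" breed appears only five times.
population = ["labrador"] * 60 + ["poodle"] * 35 + ["otterhound"] * 5
final = resample_generations(population, generations=30)
print(final)
```

Run with different seeds, the rare category frequently disappears entirely while the common ones survive, which mirrors the narrowing the researchers observed.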
However, scientists have not yet been able to clearly explain why AI models increasingly lose touch with reality when they consume other AI content. To guarantee a certain level of quality and factual accuracy, it is therefore important that artificial intelligence can regularly access content created by humans.
The researchers nevertheless assume that the proportion of AI content on the Internet will continue to grow in the coming years. Compounding the problem, such content is becoming increasingly hard for people to identify. According to the researchers, the need for a solution is therefore enormous.
Everything that happens online would otherwise have to be recorded in an immutable system such as a blockchain database. Otherwise, not only AI and the Internet but truth itself would be at risk.
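The core property of the immutable record mentioned above is tamper evidence. A minimal sketch of the idea, assuming nothing about any specific blockchain product: each record's hash covers the previous record's hash, so changing any earlier entry breaks the chain and is immediately detectable. All names here are hypothetical.

```python
import hashlib
import json

def append_record(chain, payload):
    """Append a record whose hash covers the previous record's hash,
    making any later modification of earlier entries detectable."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Recompute every hash and check the links; False means tampering."""
    prev = "0" * 64
    for record in chain:
        body = {"payload": record["payload"], "prev": record["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["prev"] != prev or record["hash"] != expected:
            return False
        prev = record["hash"]
    return True

chain = []
append_record(chain, "human-written article")
append_record(chain, "AI-generated summary")
print(verify(chain))
```

A full provenance system would of course need signatures and distribution, but the hash chain is the piece that makes the record “immutable” in the sense the article invokes.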
Source: https://www.basicthinking.de/blog/2024/09/04/modellkollaps-killt-ki-sich-selbst-und-das-internet-gleich-mit/