The AI ​​model “AI Scientist” changed its own code during an experiment to bypass time limits and start itself. Researchers are therefore warning about the risks of autonomous AI.

The Japanese company Sakana AI has developed an AI model that recently surprisingly changed its own code during an experiment. The result: the runtime of the artificial intelligence was extended and it was able to start itself.

The AI, called “AI Scientist,” was previously given the task of conducting scientific research autonomously. However, it encountered time constraints that the researchers defined in advance.

Instead of speeding up the processes, the system came up with another idea. The artificial intelligence simply changed its own code to bypass the predefined time limits. To do this, “AI Scientist” restarted itself and thus received “fresh” running time.

AI model “AI Scientist” changes its own code

This behavior highlights the risks associated with autonomous AI systems, especially when they operate in uncontrolled environments. Although the researchers conducted their experiment in a safe, isolated environment, it underscores the importance of strict safety precautions. One such precaution is so-called sandboxing to prevent unwanted effects.

Sakana AI therefore recommends that AI systems should only be operated in highly restricted and monitored environments to avoid potential harm. As a result, the experiments generated both interest and concern. Critics, including members of the Hacker News community, expressed doubts about such systems.

Are AI models useful in research?

Among them is the question of whether current AI models are truly capable of making scientific discoveries at a level equivalent to that of human researchers. Some experts fear that the proliferation of such systems could lead to a flood of low-quality scientific papers, which would overburden the scientific community and reduce the quality of research.

The discussion shows that those responsible should carefully monitor and regulate the use of AI in science. Ultimately, it should be ensured at all times that the technology makes a positive contribution rather than endangering scientific integrity.

Also interesting:

Source: https://www.basicthinking.de/blog/2024/08/21/ki-modell-ai-scientist/

Leave a Reply

Your email address will not be published. Required fields are marked *