Artificial intelligence is not only helpful. As a study by researchers at MIT now shows, AI is quite capable of deceiving and cheating people.

With all the hype surrounding artificial intelligence, more and more researchers are also looking at the dark side of this technology. Psychology professor Joe Árvai from the University of Southern California, for example, is concerned that the technology could undermine people's ability to make well-considered decisions.

Critics also point to problems such as discrimination in schools or reduced well-being among employees. One of the biggest difficulties in using AI, however, is deception.

Researchers at the Massachusetts Institute of Technology (MIT) have now conducted a study to determine the extent to which AI systems can deceive and cheat humans.

Study shows: AI can deceive people

Many AI systems are actually designed to help people and take certain tasks off their hands. Developers often attach great importance to ensuring that these systems work honestly and without discrimination.

However, the reality is quite different, as researchers at MIT explain in a publication in the journal Patterns. According to them, even AI systems designed to be helpful are capable of deceiving and cheating people.

“Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test,” the publication states.

This poses serious risks. The researchers warn of fraud and election manipulation, but also of long-term consequences such as a loss of control over AI systems.

AI development needs stricter regulation

For this reason, the researchers led by Peter S. Park are calling for, among other things, stricter laws that require transparency about AI interactions, as well as regulatory frameworks for assessing the risks of AI deception.

In addition, the field of AI deception needs to be researched more intensively so that it can be detected and prevented more reliably in the future. A proactive approach is necessary here “to ensure that AI acts as a beneficial technology that augments rather than destabilizes human knowledge, discourse, and institutions.”

AI: GPT-4 and Cicero also deceive humans

Even language models such as GPT-4 are not exempt from this, as the researchers' study shows. In an experiment conducted by developers at OpenAI, the AI system pretended on the platform TaskRabbit to be a visually impaired person who was unable to solve a CAPTCHA on its own. The language model thereby got a human to take on the task for it.

However, according to the study, the manipulation is even more pronounced in Meta's AI system Cicero. In the strategy game Diplomacy, which is actually about forging alliances with other players, the AI often behaved unfairly, according to the MIT researchers.

“We found that Meta's AI had learned to be a master of deception, but Meta had failed to train its AI to win honestly,” the researchers write.

This behavior helped Cicero win far more often than average in the game tested: the AI system ranked among the top ten percent of players.


Source: https://www.basicthinking.de/blog/2024/06/06/ki-menschen-taeuschen/
