Can people tell whether images, texts or audio files were created by humans or by AI? A new study shows that much AI-generated content is so convincing that we can no longer distinguish it from reality.
The field of artificial intelligence has developed enormously in recent years. With just a few clicks, users can create not only text but also images, video and audio content.
But is it possible to distinguish this AI-generated content from that created by humans? An online survey with around 3,000 participants from Germany, China and the USA addressed exactly this question.
Can humans identify AI content?
Numerous research institutions worked together on the study. In addition to the CISPA Helmholtz Center for Information Security, the Ruhr University Bochum, the Leibniz University Hannover and the TU Berlin were also involved. The researchers have now presented the results of their study at the 45th IEEE Symposium on Security and Privacy in San Francisco.
Between June and September 2022, they collected their data in an online survey in Germany, China and the USA. Respondents were randomly assigned to one of three categories: text, image or audio. Each participant was then presented with a mix of 50 percent real and 50 percent AI-generated media.
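To picture this design, here is a minimal Python sketch of such an assignment scheme. It is purely illustrative: the category names, pool sizes and the assign_participant helper are assumptions for demonstration, not the study's actual materials or code.

    import random

    # Illustrative sketch of a randomized survey assignment with a
    # balanced 50/50 mix of real and AI-generated stimuli.
    CATEGORIES = ["text", "image", "audio"]

    def assign_participant(stimuli_per_participant=10, seed=None):
        """Assign a participant to one medium at random and build a
        half-real, half-AI stimulus set for that medium."""
        rng = random.Random(seed)
        category = rng.choice(CATEGORIES)   # random assignment to one medium
        half = stimuli_per_participant // 2
        # Hypothetical stimulus pools; in the study these were curated media files.
        real = [f"{category}_real_{i}" for i in range(half)]
        fake = [f"{category}_ai_{i}" for i in range(half)]
        stimuli = real + fake
        rng.shuffle(stimuli)                # present in random order
        return category, stimuli

    category, stimuli = assign_participant(seed=42)
    print(category, stimuli)

In a design like this, the balanced mix matters: if participants guessed at random, they would be right about half the time, so accuracy meaningfully above 50 percent is the signal that people can actually tell real from AI-generated media.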
The majority of study participants were unable to distinguish AI content from human-made content – regardless of the medium or country of origin. “We have already reached a point where it is difficult, although not yet impossible, for people to recognize whether something is real or AI-generated,” explains Thorsten Holz from the CISPA Helmholtz Center for Information Security.
Artificially generated content can be misused in many ways. Important elections are coming up this year, such as the European Parliament elections and the presidential election in the USA.
AI-generated media could “very easily” be used to influence political opinion. “I see this as a major threat to our democracy,” Holz points out.
Results are independent of education or origin
For their study, the researchers not only differentiated between content created by humans and by artificial intelligence. They also collected socio-biographical data and asked about factors such as media literacy, holistic thinking, general trust, cognitive reflection and political orientation.
It was surprising, says Holz, “that there are very few factors that can explain whether people can better recognize AI-generated media or not.”
Even across different age groups and factors such as educational background, political attitude or media literacy, the differences were small.
This could also become a problem for online security. According to Lea Schönherr from the CISPA Helmholtz Center for Information Security, AI-generated media could give rise to "the next generation of phishing emails," personalized with text tailored perfectly to the recipient.
It is therefore important to continue researching how people can recognize AI-generated content. Automated fact-checking processes are also conceivable here, says Schönherr.
Source: https://www.basicthinking.de/blog/2024/06/03/studie-menschen-ki-inhalte-erkennen/