There is currently no legally mandated labeling requirement for texts created by artificial intelligence. But how are you supposed to recognize texts that an AI has written? A US agency now wants to remedy this.

Artificial intelligence is already being used in many areas, whether to create photo and video material or to write longer texts. However, there is currently no obligation to label content created this way.

Although it is advisable to label such content accordingly when it is published, there is no legal requirement to do so. That makes it particularly difficult for users to recognize this content.

The National Institute of Standards and Technology (NIST) now wants to change that. The US agency has launched the NIST GenAI initiative, which is intended to create systems that address exactly this problem.

NIST GenAI is intended to help recognize AI-generated texts

With the NIST GenAI initiative, the US agency wants to build systems that can recognize texts, images, and videos generated by AI. These so-called “content authenticity” detection systems should also make it possible to expose “deepfake” videos in the future.

The aim is, among other things, to detect fake and misleading content created with the help of artificial intelligence.

To achieve this, NIST says it first wants to explore the capabilities and limits of generative AI models. These assessments can then be used to promote information integrity and to guide “the safe and responsible use of digital content”.

Current systems are not reliable enough

However, the tools currently available that promise exactly this are not yet reliable enough. That is why NIST wants to work on its initiative with teams from the AI industry and with researchers.

They are initially invited to submit so-called “generators” of AI-supported content as well as “discriminators” that can identify such content.

For an initial study, the generators must be able to create a text of no more than 250 words on a specific topic. The discriminators, in turn, must determine whether these texts were created by AI or by humans.
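To make the two roles in the study concrete, here is a minimal, purely illustrative sketch in Python. The function names, the word-counting logic, and the vocabulary-diversity heuristic are all assumptions for illustration; they do not reflect NIST's actual submission interface or any real detection method.

```python
def generate(topic: str, max_words: int = 250) -> str:
    """Stand-in 'generator': a real submission would call an AI model.

    Here we just produce repetitive filler text and truncate it to the
    study's 250-word limit.
    """
    text = f"A short essay about {topic}. " * 50
    return " ".join(text.split()[:max_words])


def discriminate(text: str) -> bool:
    """Stand-in 'discriminator': True means 'looks AI-generated'.

    A real submission would use a trained classifier; this crude
    heuristic merely flags texts with very low vocabulary diversity.
    """
    words = text.lower().split()
    if not words:
        return False
    diversity = len(set(words)) / len(words)
    return diversity < 0.3


sample = generate("information integrity")
print(len(sample.split()))   # never exceeds 250
print(discriminate(sample))  # the repetitive filler text is flagged
```

The point of the sketch is only the division of labor the study tests: one component produces bounded-length text, the other makes a binary AI-vs-human call on it.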

Registration for the study is open from May 1 to August 1. A second round will follow, and NIST expects the results by February 2025.

The number of deepfakes has increased dramatically

It is hardly surprising that NIST now wants to move forward quickly with its initiative. The amount of manipulated content on the internet is constantly increasing.

According to the World Economic Forum, the number of deepfakes published since the beginning of the year has increased by more than 900 percent compared to the same period in 2023.
