Is there a labeling requirement for AI-generated content? In Germany, there is currently no legal obligation to mark such content as "AI-generated". But that leaves plenty of room for discussion. So let's discuss! A column.

AI models such as ChatGPT and Midjourney can now generate images, texts and music that are hardly, if at all, distinguishable from human works. The discussion about the need for a labeling requirement for AI-generated content is therefore becoming increasingly important.

Should AI-generated works be labeled as such, or would this be an unnecessary regulation that in practice creates more problems than it solves?

Is there a need for a labeling requirement for AI content?

A frequently cited argument in favor of a labeling requirement is transparency: users should be able to tell whether a work was created by a human or by a machine.

Many people attach great importance to the creative process behind a text, a song or an image. A work of art created by a machine may be technically impressive, but it lacks the emotional and creative input of a human.

Someone viewing an AI-generated image or listening to an AI-generated song might evaluate the work differently if they knew it was not created by a human. A labeling requirement could create transparency and help with the evaluation of a work.

Consumer protection

There are also arguments for mandatory labeling in the area of consumer protection. Consumers should be able to identify whether they are dealing with an AI-generated or a human-created work so they can make an informed decision about consuming or purchasing a product or service, particularly when the work is used to stimulate buying interest.

Especially in advertising, where authenticity plays a particular role, the use of AI-generated models or unlabeled texts could be perceived as misleading. For example, a company that uses AI-generated people in its advertising campaign could give consumers the impression that real people are wearing the advertised clothing or using the service and are therefore interested in these offers.

The arguments just mentioned seem convincing at first glance. But if you look at them more closely, doubts quickly arise about their validity.

No labeling requirement for other digital tools

In general, one could ask why a labeling requirement for AI-generated content should be necessary at all. There are no such obligations for other, similar processes, although numerous technical aids are already used today to change or improve works created by humans.

A photographer who edits an image does not have to disclose what changes were made. Whether a mole has been removed or the exposure adjusted usually remains hidden from the viewer, and there is no legal requirement to label this.

So why should an image generated entirely by an AI be treated differently than one created by a human and then digitally edited?

The same applies to music production. Many artists use digital tools and software to refine or modify their works. A piece of music that was originally recorded on real instruments can be subsequently changed using software or supplemented with electronic instruments. Here, too, there is no obligation to label the digital tools used.

Ultimately, one could even argue that the origin of an image, a text or a piece of music – whether human or machine – plays no role in a user's experience with the work, as long as the end result is convincing.

Common sense

Another argument against mandatory labeling concerns the aesthetic and artistic aspect of AI-generated images. Many images, such as those generated by DALL-E, are – at least currently – easily recognizable as artificial. The question arises as to whether labeling is even necessary here, since the artificial nature of the image is obvious anyway.

A user who looks at such an image will quickly realize that it is not the work of a human. So why should there be a labeling requirement if the artificial origin of the work is already clearly evident?

Another consideration concerns AI-generated images that depict deceptively realistic subjects such as fruit, nature or people. Such images are increasingly being used in advertising or for marketing purposes. The question here is whether a labeling requirement is really necessary if the actual purpose of the image, for example the presentation of products such as clothing, is in the foreground.

The consumer ultimately focuses on the product being advertised, not on the background, the scenery or the people shown. In this context, does it really matter whether, for example, a model is real or artificial?

From the perspective of many advertisers and users, this will not matter as long as the result is aesthetically pleasing and highlights the product.

International dimension

Finally, there is the question of the geographical scope within which a labeling requirement would apply. Like many other works, AI-generated content is usually available globally. It is therefore questionable whether a regulation in Germany alone would actually make sense in times of globalization and the free flow of data on the internet.

Even if Germany were to introduce such a labeling requirement, the question would remain as to how AI-generated content from other countries would be dealt with.

By this I don't just mean the question of whether images that were created using a generator in the USA, for example, may be used in Germany (that would be the question of "whether"). Rather, I'm thinking about the confusion that could arise if, for example, there were no labeling requirement in the USA, but there were one in Germany.

Then, when users visit a US website that shows an image, they would always have to keep in mind that this image could possibly come from an AI. But will they really think about these differing labeling rules?

Conclusion: Labeling requirement for AI content

The discussion about a labeling requirement for AI-generated content concerns a complex topic. On the one hand, there are good arguments that users should be able to clearly see whether a work comes from a human or a machine. This primarily concerns aspects of authenticity and trust, which are particularly important in areas such as art or advertising.

On the other hand, there are good arguments against such a labeling requirement. Much content that is created today – be it in music production or in the context of image editing – is not subject to any labeling requirement, even though it is digitally edited and therefore differs – sometimes significantly – from the original work.

In addition, the practical implementation of such a labeling requirement would be difficult and could create unnecessary bureaucracy, which may ultimately cause more confusion than clarity.

Perhaps, however, the call for mandatory labeling is ultimately simply based on our uncertainty when dealing with a new technology. AI-generated works are something completely new that we as humans first have to understand and classify.

Instead of taking the time to find out how we can deal meaningfully with such works, with this new form of creativity, we may be calling for regulation too hastily. The question for me is whether we actually need a labeling requirement – or whether we should first get used to this new reality and learn how to deal with it.

Source: https://www.basicthinking.de/blog/2024/10/18/kennzeichnungspflicht-fuer-ki-inhalte-ein-voreiliger-schrei-nach-regulierung/
