Artificial intelligence is on everyone’s lips, at the latest since the release of the ChatGPT language model. The AI generates astonishingly authentic answers and phrasings, but that also harbors dangers. So how can AI texts be recognized?

“This sentence could come from an AI.” Ever since the ChatGPT language model caused a sensation, AI-generated texts have become hard to distinguish from human writing. But how can you find out whether a text came from a human or a machine?

ChatGPT: How to recognize AI texts

We put that question to the chatbot ChatGPT itself. According to its answer, there are three basic ways to recognize AI texts:

There are several ways to recognize AI text. One way is to use metrics like perplexity, which measures a model’s surprise in predicting the next word in a text. A low value indicates that the model is good at generating the text. Another option is to use validation methods such as the Turing test, in which a human validator tries to determine whether a text was generated by a human or an AI. Another method is to use machine learning models that are trained to distinguish AI text from human-made text.

Identifying an AI text by how predictable its next words are sounds plausible at first. However, ChatGPT was programmed precisely to imitate a human as closely as possible, so its output is not always conspicuously predictable.
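To make the perplexity idea concrete, here is a minimal sketch (our own illustration, not from the original article) that scores a text with the openly available GPT-2 model via the Hugging Face transformers library. The lower the score, the more predictable, and thus potentially machine-like, the text is to the model:

```python
# Minimal sketch: measuring a text's perplexity with GPT-2
# (illustrative only; real detectors are more sophisticated).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Let the model predict every token from its left context.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the average cross-entropy loss.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    # Perplexity is the exponential of the average negative log-likelihood.
    return torch.exp(loss).item()

print(perplexity("There are several ways to recognize AI text."))
```

A very low value would support the suspicion of machine text; as noted above, though, models tuned to sound human blur exactly this signal.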

Turing test and software: How to recognize AI texts

Options two and three are therefore somewhat more effective. But if you just want to find out quickly whether a text comes from an AI or a human, you hardly want to carry out a Turing test every time. In that test, incidentally, a human examiner tries to determine, through a conversation with a (supposed) AI, whether they are talking to a computer program or a person.

That, too, is becoming more and more difficult. Which leaves option three: machine learning models, i.e. algorithms trained to recognize certain kinds of patterns. But how reliable and effective are such tools?
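As a sketch of what such a model could look like (our own toy example; the article names no concrete tool or training data here), one could train a simple text classifier on a corpus labeled human versus AI:

```python
# Toy detector sketch: TF-IDF features plus logistic regression,
# trained on texts labeled 0 (human) or 1 (AI).
# The two-sentence corpus is purely illustrative; a real detector
# would need thousands of labeled examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Scribbled this on the train, half asleep, sorry for the typos.",  # human
    "There are several ways to recognize AI text.",                    # AI
]
labels = [0, 1]

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

# Estimated probability that a new text is AI-generated.
print(detector.predict_proba(["Another option is to use validation methods."])[0][1])
```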

Google’s algorithms did not recognize artificial intelligence

The dangers of artificial intelligence are complex. New York, for example, has banned the use of ChatGPT in public schools over fears of negative effects on learning. The online magazine CNET, meanwhile, had its financial articles written by AI and achieved good Google rankings with them.

The US company had actually promised to penalize AI content in its search results. CNET later revised its AI articles, but the magazine apparently tried to keep the texts’ origin secret, as there was no public announcement.

However, Google’s algorithms were apparently unable to recognize the AI texts as such. Both examples reveal how artificial intelligence can be used to gain an advantage. But if even the algorithms of billion-dollar companies like Google cannot identify an AI, how should anyone else?

AI vs AI: A game of cat and mouse

The Giant Language Model Test Room (GLTR), developed by researchers at Harvard University and the technology group IBM, at least offers hope. The software draws on ChatGPT training data to estimate the probability that a text came from the AI.

However, even the developers concede that GLTR can only flag AI texts in suspicious cases. For example, the program rated the ChatGPT answer to our question about recognizing AI texts as top-quality, human-like content and therefore failed to recognize it.
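GLTR’s underlying idea is to check, token by token, how highly a language model ranks the word that actually appears; text written almost entirely with top-ranked words looks machine-like. A rough sketch of that rank analysis with GPT-2 (our own approximation, not GLTR’s actual code) could look like this:

```python
# GLTR-style sketch: for each token, find its rank in the model's
# predicted distribution. A high share of top-10 tokens is suspicious.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_ranks(text: str) -> list[int]:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # shape: (1, seq_len, vocab_size)
    ranks = []
    for pos in range(ids.shape[1] - 1):
        # Rank of the actual next token among the model's sorted predictions.
        order = torch.argsort(logits[0, pos], descending=True)
        ranks.append((order == ids[0, pos + 1]).nonzero().item())
    return ranks

ranks = token_ranks("There are several ways to recognize AI text.")
print(sum(r < 10 for r in ranks) / len(ranks))  # share of top-10 tokens
```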

The GPTZero tool, which is likewise based on AI, also promises a remedy. Princeton student Edward Tian released the app in early 2023 in response to ChatGPT. However, this analysis tool has weaknesses too.

Artificial intelligence: What dangers does AI pose?

In our tests, GPTZero recognized some longer ChatGPT texts, but the program seemed to struggle with shorter passages. And even if the cat-and-mouse game of AI versus AI offers the best prospects, anyone who really wants to prevent detection tools from recognizing their texts can simply adapt their language models accordingly.
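GPTZero is reported to rate texts on perplexity and “burstiness”, i.e. how strongly predictability varies from sentence to sentence, since human writing tends to vary more. A crude sketch of such a measure (our own assumption-laden illustration, reusing the perplexity() helper from the sketch above) could be:

```python
# Crude burstiness sketch: spread of per-sentence perplexity.
# Uniformly flat scores across sentences hint at machine text.
# Assumes the perplexity() helper from the earlier GPT-2 sketch.
import statistics

def burstiness(text: str) -> float:
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    scores = [perplexity(s) for s in sentences]
    # Higher standard deviation = more "human" variation.
    return statistics.stdev(scores) if len(scores) > 1 else 0.0
```

This would also explain why short passages trip the tool up: with only one or two sentences, there is almost nothing to measure.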

Even OpenAI, the company behind ChatGPT, acknowledged in a statement that development could amount to such a game of cat and mouse. When asked what other dangers artificial intelligence poses, ChatGPT replies: “AI can pose a variety of dangers (editor’s note: grammatically incorrect in the original), including:

  1. “Faulty decisions: AI systems are based on algorithms and data that can contain errors, which can lead to inaccurate or even wrong decisions.”
  2. “Discrimination: AI systems can be discriminatory when trained on incomplete or inaccurate data, which can lead to unfair decisions.”
  3. “Loss of jobs: AI can accelerate automation processes and thus lead to job losses.”
  4. “Warfare and Attacks: AI systems can be used in weapon systems, which can facilitate the use of force and transform warfare.”
  5. “Privacy and data security: AI systems require large amounts of data that may be sensitive and personal, which can lead to privacy issues.”
  6. “Ethical and Moral Issues: AI can raise ethical and moral issues, particularly around accountability for decisions and actions taken by AI systems.”


Source: https://www.basicthinking.de/blog/2023/01/22/kuenstliche-intelligenz-so-kannst-du-ki-texte-erkennen/
