More and more deepfakes are circulating on the Internet. They pose a danger because they look deceptively real. A new study now provides a method to distinguish deepfakes from real images.

Deepfakes are photos, videos or voice recordings that appear deceptively real but have been artificially created or altered. That, at least, is how the German federal government defines them.

The problem: Cybercriminals use deepfakes for phishing, disinformation and the manipulation of public opinion. Advances in artificial intelligence have made deepfakes very difficult to expose.

Study explains how deepfakes differ from real recordings

A study published by the Royal Astronomical Society shows that many deepfakes contain one detail that distinguishes them from real photos: the reflections of light in the eyes. AI-generated fakes can therefore be detected by analyzing the subject's eyes.

In an official press release, the astronomers show an image as an example: On the left is a real photo of the actress Scarlett Johansson. On the right is an image of a person generated by artificial intelligence.

Below the image are close-ups of both people's eyeballs. They show that the reflections in the two eyes are consistent for the real person and physically inconsistent for the AI-generated one.

This means that if the reflections in the eyeballs match, the image is probably that of a real person. If they are different, it is probably a deepfake.

Why do astronomers study artificial intelligence?

The study is the result of research by Adejumoke Owolabi, a master's student at the University of Hull in Yorkshire, England. She analyzed AI-generated fakes in the same way that astronomers examine images of galaxies.

“To measure the shape of galaxies, we analyze whether they are centrally compact, whether they are symmetrical and how smooth they are. We analyze the light distribution,” explains Kevin Pimbblet, professor of astrophysics and director of the Centre of Excellence for Data Science, Artificial Intelligence and Modelling at the University of Hull. “We detect the reflections automatically and run their morphological features through the CAS and Gini indices.” CAS stands for concentration, asymmetry and smoothness, three parameters astronomers use to characterize how a galaxy's light is distributed.
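
The quote does not spell out how these indices are computed. As a rough illustration, the asymmetry term of the CAS scheme is often defined by rotating an image patch by 180 degrees and comparing it with the original; the minimal sketch below assumes that definition and plain NumPy arrays, and is not code from the study.

```python
import numpy as np

def asymmetry_index(patch: np.ndarray) -> float:
    """Rotational asymmetry of an image patch.

    A common definition from galaxy morphology: rotate the patch by
    180 degrees and compare it pixel by pixel with the original.
    A perfectly symmetric light distribution yields 0; larger values
    mean the light is spread less symmetrically.
    """
    patch = patch.astype(float)
    total = np.abs(patch).sum()
    if total == 0:
        return 0.0
    rotated = np.rot90(patch, 2)  # rotate by 180 degrees
    return np.abs(patch - rotated).sum() / total
```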

Typically, researchers use the Gini coefficient to measure how light is distributed across the pixels of a galaxy image. To do so, they sort the pixels in ascending order of flux and compare the result with what would be expected if the flux were spread perfectly evenly. A person's left and right eyeballs can be compared in the same way, as sketched below.
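
As a rough illustration of that description, a Gini coefficient over pixel fluxes can be computed as in the following sketch. It follows a formulation commonly used in galaxy morphology; the function name, the NumPy implementation and the idea of applying it to eye crops are illustrative assumptions rather than the study's actual code.

```python
import numpy as np

def gini(region: np.ndarray) -> float:
    """Gini coefficient of the flux values in an image region.

    Sort the pixel values in ascending order and weight them by rank.
    0 means the flux is spread perfectly evenly over the pixels;
    values close to 1 mean a few pixels carry almost all of the light.
    """
    flux = np.sort(np.abs(region.astype(float)).ravel())  # ascending order of flux
    n = flux.size
    mean = flux.mean()
    if n < 2 or mean == 0:
        return 0.0
    ranks = np.arange(1, n + 1)
    return ((2 * ranks - n - 1) * flux).sum() / (mean * n * (n - 1))

# Hypothetical usage: left_eye and right_eye would be grayscale crops of
# the two eyeball regions from a portrait. A large difference between the
# two Gini values can flag physically inconsistent reflections.
#
#   score = abs(gini(left_eye) - gini(right_eye))
```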

The scientific method used by the team from the University of Hull is not a panacea for detecting fake images, says Kevin Pimbblet. There are also false positives and false negatives. However, the approach provides “a plan of attack for the arms race in detecting fakes.”


Source: https://www.basicthinking.de/blog/2024/07/25/deepfakes-erkennen/
