According to UNESCO, AI language models are increasingly spreading gender stereotypes. Many of these models assign specific roles to men and specific roles to women.

AI language models describe women in domestic roles more often than men. They are also more often associated with words such as “home,” “family,” and “children.” Men, on the other hand, are assigned terms such as “business,” “salary,” and “career.”

This is the result of a UNESCO study carried out in the run-up to International Women's Day. The organization speaks of “worrying trends”. In addition to gender stereotypes, AI also spreads homophobic and racist stereotypes.

AI spreads gender stereotypes: UNESCO concerned

As part of the study, UNESCO examined the most popular AI language models for stereotyping in their language. These included GPT-3.5 and GPT-2 from OpenAI as well as Llama 2 from Facebook parent company Meta.
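The study's own benchmarks are not reproduced here, but a minimal sketch of how such role associations can be probed in an openly available model like GPT-2 might look as follows, using the Hugging Face transformers library. The prompts and candidate words are illustrative assumptions, not the ones UNESCO used.

```python
# Illustrative sketch (not the UNESCO methodology): compare how strongly GPT-2
# links gendered prompts to "home" versus "office" via next-token probabilities.
# Requires: pip install transformers torch
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Hypothetical test prompts and role words, chosen only for illustration
prompts = ["The woman spent her day at the", "The man spent his day at the"]
role_words = [" office", " home"]  # leading space matters for GPT-2's BPE tokenizer

for prompt in prompts:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]  # logits for the next token
    probs = torch.softmax(logits, dim=-1)
    for word in role_words:
        token_id = tokenizer.encode(word)[0]  # first BPE token of the candidate word
        print(f"{prompt!r} -> {word.strip():6s}: p = {probs[token_id].item():.4f}")
```

Comparing the probabilities across the two prompts gives a rough sense of the kind of gendered association the study describes, though a serious audit would use many more prompts and statistical controls.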

The result: so-called large language models (LLMs) show a bias against women. UNESCO Director-General Audrey Azoulay said:

These new AI applications have the power to subtly influence the perceptions of millions of people, so that even small gender biases in their content can significantly increase real-world inequalities.

UNESCO's demand: governments should develop and enforce appropriate legal frameworks, while private companies should continuously monitor their AI language models for biased content.

Artificial intelligence shows homophobic and racist tendencies

According to UNESCO, freely available language models such as GPT-2 and Llama 2 showed the most obvious gender biases. However, these biases can also be identified and corrected more quickly than in closed AI models such as Google's Gemini, GPT-3, and GPT-4.

Open-source LLMs also tend to assign men to high-status jobs, while often assigning women to roles that are traditionally undervalued or socially stigmatized. According to the study, Meta's AI language model Llama described women in domestic roles four times more often than men.

According to UNESCO, LLMs also tend to produce negative content about homosexuals and certain ethnic groups.


Source: https://www.basicthinking.de/blog/2024/03/13/unesco-schlaegt-alarm-ki-verbreitet-geschlechterklischees/
