The AI chatbot ChatGPT generates texts that can hardly be distinguished from human writing. Alongside the artificial intelligence itself, this requires vast amounts of training data. Some of that data was processed in Kenya, among other places, where low-wage workers reportedly had to read traumatizing texts in order to optimize the AI.

ChatGPT is currently causing a sensation because the artificial intelligence delivers astonishing texts and results that can hardly be distinguished from human ones. But this requires not only algorithms but also vast amounts of training data.

The problem with earlier AI models: they cannot recognize or avoid prejudice, false information, and the hate speech and incitement found on the internet. For ChatGPT to succeed at exactly that, developer OpenAI relies primarily on human input to train the AI.

ChatGPT: The dirty business with low-wage workers from Kenya

But as laudable as that may sound at first, the reality appears far less so. As Time magazine reported, OpenAI hired the data company Sama, also headquartered in San Francisco, to provide training data for ChatGPT.

However, Sama’s employees work predominantly from countries such as Kenya, Uganda, and India. The sticking point: since November 2021, the company has employed around three dozen low-wage workers from Kenya on behalf of OpenAI. They were paid between $1.32 and $1.50 an hour, depending on seniority and performance.

ChatGPT: Sama employee describes content as “torture”

According to Time, this corresponds roughly to the salary of a hotel receptionist in the Kenyan capital Nairobi. OpenAI, however, apparently paid Sama a contractually agreed rate of $12.50 per hour, roughly nine times as much.

A Sama spokesperson justified the large gap by saying that the rate had to cover all of the company’s costs. The troubling part: the workers had to deal with texts and content that were at times traumatic, including depictions of murder, animal cruelty, and sexualized violence (including against children).

One Sama employee even described the content to Time as “torture”. According to the report, distributing some of this content is prohibited under US law. The report also describes the working conditions in Kenya as poor, with pay far below the US minimum wage, even though OpenAI and Sama are both based in Silicon Valley.

OpenAI does not deny the allegations – Sama ends the cooperation early

OpenAI has not contradicted the allegations so far. On the contrary, the company reportedly stated that such training data is essential:

“Classifying and filtering harmful content is a necessary step in minimizing violent and sexual content in training data and in developing tools that can detect it.”
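To make the quoted idea concrete: human annotators label examples of harmful text so that a model can learn to flag similar content automatically. The following minimal sketch illustrates that general approach; the tiny dataset, the scikit-learn pipeline, and the model choice are purely illustrative assumptions and say nothing about OpenAI’s or Sama’s actual systems, which operate at a vastly larger scale.

```python
# Illustrative sketch only: a tiny text classifier trained on
# human-provided labels, in the spirit of the filtering tools the
# statement describes. All data and model choices here are assumptions
# for demonstration, not details of any real production system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical human-labeled examples (1 = harmful, 0 = benign).
texts = [
    "a friendly conversation about gardening",
    "a graphic description of violence",
    "a recipe for vegetable soup",
    "threatening and abusive language",
]
labels = [0, 1, 0, 1]

# TF-IDF features plus logistic regression: a deliberately simple
# stand-in; real systems rely on far larger labeled datasets.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# Score new text before it is allowed into training data.
print(classifier.predict(["threatening language about violence"]))
```

The human labeling work is exactly the step that, according to the report, was outsourced to the workers in Kenya.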

Sama has meanwhile terminated its contract with OpenAI early, around eight months before it was due to expire. The company announced that it had revised its policies and was exiting the content moderation business.


Source: https://www.basicthinking.de/blog/2023/01/23/kenia-chatgpt-traumatisierende-texte/
