The EU relies on AI-based systems to carry out border controls and monitor its external borders. However, the organization AlgorithmWatch highlights numerous problems with the technologies in use, among them a lack of transparency and potential human rights violations.

To monitor its external borders and carry out border controls, the European Union increasingly relies on automated, AI-based systems. According to the organization AlgorithmWatch, ethical considerations play hardly any role in this, despite potential human rights violations.

This emerges from a database specially created by the NGO, which lists many of the technologies in use and documents the problems associated with them. Even though the underlying research is publicly funded, the EU withholds project information. AlgorithmWatch has therefore called for political consequences.

Border controls with AI endanger human rights

Under the project name “Automation on the Move”, the organization examined 24 research projects commissioned by the EU and assessed them for potential risks. These include systems for controlling unmanned vehicles and drones, for biometric data processing, and other AI-based surveillance models.

AlgorithmWatch consulted scientists, journalists and civil rights activists to uncover the risks of these systems. According to the investigation, technical errors could lead to false identifications, which in turn carry the risk of unwarranted surveillance of people.

Against the background of increasingly strict migration policy, discrimination by AI-based algorithms is becoming a growing problem. According to AlgorithmWatch, the AI systems in use could be misused or could restrict people's freedom of movement.

This development endangers fundamental human rights, including the right to privacy, the right to equal treatment and the right to asylum. According to the NGO, these risks are not sufficiently addressed in the EU research projects.

Little transparency and hardly any ethical consideration

AlgorithmWatch also criticizes a lack of transparency, even though the projects are publicly funded. The European Research Executive Agency (REA) repeatedly denied the organization access to information, on the grounds that provider and security interests outweigh the interests of the public.

To obtain information, those involved in the project analyzed television recordings and interviews, among other sources. One of the findings: the controversial surveillance technology ANDROMEDA is already in active use.

AlgorithmWatch also doubts that the systems will remain limited to border controls. The NGO fears that many of the technologies could be used militarily. There is also a risk of the systems falling into the hands of autocratic states, especially since Belarus was involved in at least two projects until Russia's war of aggression against Ukraine.

Border controls with AI: AI Act offers scope

With the so-called AI Act, the EU aims to restrict the use of ethically questionable AI models. According to AlgorithmWatch, however, the law has major gaps, especially in the areas of border protection and migration. Since the EU member states retain a certain amount of room for maneuver, the organization makes concrete demands.

Clear supervisory and transparency requirements should therefore be standard for high-risk applications. The involvement of civil society, affected people and experts is also essential for the design and evaluation of AI systems.

The influence of the defense industry must be reduced, and military and civilian systems should be strictly separated, especially when it comes to the transparency of research results. Fabio Chiusi, head of the “Automation on the Move” project at AlgorithmWatch, says:

“Whenever you look at automated technology as a solution to a social problem or a phenomenon as old as humanity, such as migration, you will end up justifying discrimination, racism and harm.”

Source: https://www.basicthinking.de/blog/2024/12/09/grenzkontrollen-mit-ki-diskriminierung-rassismus-und-schaden-rechtfertigen/