Artificial intelligence (AI) is playing an increasingly important role in companies and organizations. But as AI technologies become more widespread, so do the regulatory requirements. The EU AI Regulation (EU AI Act) is one of the first comprehensive pieces of legislation in the world to regulate the use of AI. A central element of this regulation is the AI competence requirement. But what does this actually mean for companies and CIOs?
The EU AI Regulation: An Overview
The EU AI Regulation aims to ensure the safe and ethical use of AI systems. It classifies AI applications into four categories according to their risk:
- Minimal risk: for example, AI-supported spam filters, automatic spelling corrections
- Low risk: for example chatbots, AI-powered content recommendation systems
- High risk: for example, AI in personnel recruiting or lending, AI-supported medical diagnoses, autonomous vehicles, critical infrastructure management
- Unacceptable risk: for example, social scoring systems as well as certain forms of biometric surveillance and emotional AI in workplace control
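The four risk tiers above can be pictured as a simple internal inventory. The following Python sketch is purely illustrative: the enum values, example use cases, and obligation summaries are assumptions for demonstration, not a legal classification, which always requires a case-by-case assessment against the regulation itself.

```python
from enum import Enum

class AIRiskTier(Enum):
    """The four risk tiers of the EU AI Act (simplified)."""
    MINIMAL = 1       # e.g. spam filters, spell checkers
    LIMITED = 2       # e.g. chatbots, recommendation systems
    HIGH = 3          # e.g. recruiting, lending, medical diagnosis
    UNACCEPTABLE = 4  # e.g. social scoring -> prohibited

# Illustrative mapping of hypothetical in-house systems to tiers.
# A real inventory would be populated by a legal/compliance review.
EXAMPLE_USE_CASES = {
    "spam_filter": AIRiskTier.MINIMAL,
    "customer_chatbot": AIRiskTier.LIMITED,
    "cv_screening": AIRiskTier.HIGH,
    "social_scoring": AIRiskTier.UNACCEPTABLE,
}

def obligations(tier: AIRiskTier) -> str:
    """Very rough summary of what each tier implies for an operator."""
    return {
        AIRiskTier.MINIMAL: "no specific obligations",
        AIRiskTier.LIMITED: "transparency obligations",
        AIRiskTier.HIGH: "risk management, data quality, human oversight, logging",
        AIRiskTier.UNACCEPTABLE: "prohibited (Article 5)",
    }[tier]
```

Such a tagged inventory makes it easy to answer, per system, which obligations apply, e.g. `obligations(EXAMPLE_USE_CASES["cv_screening"])` returns the high-risk summary.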
High-risk AI applications are subject to strict requirements regarding transparency, data quality, human oversight and security. Companies using such systems must also perform comprehensive risk assessments and regular compliance checks. However, the legal measures can be implemented sensibly and can even have a positive effect on software quality. Seen this way, legal compliance is not just an obstacle, but a good reason to improve quality management.
AI competency requirement: what is it?
A central element of the EU AI regulation is the so-called AI competency requirement. It obliges companies to ensure that all employees involved in the development, use or monitoring of AI systems have the necessary skills and knowledge. This not only affects technical specialists in IT departments, but also professionals in areas such as compliance, risk management, product development or HR – in short: everyone who works with AI systems or uses their results. The competency requirement applies regardless of the risk class of the AI application – i.e. even to systems with only limited risk.
The core of this requirement is a sound technical understanding: Employees should be able to understand the basic functionality of AI systems – such as machine learning, decision-making using algorithms or the handling of training and real-time data. The aim is not just to impart superficial user knowledge, but to build real technical understanding that enables a well-founded assessment.
There is also the need to know the regulatory framework and take it into account in everyday work. These include, among other things, requirements for transparency, traceability, data protection, fairness and accountability. Such knowledge is essential, especially in regulated industries such as finance or healthcare, in order to implement AI projects in a legally compliant manner.
Raising awareness of ethical issues is just as important: Anyone who works with AI systems must be aware of possible distortions, discrimination effects or a lack of transparency. Employees should learn to recognize these risks, question them critically and actively take countermeasures – be it in data preparation, in the design of models or in the validation of results.
Another aspect of the competency requirement is the ability to assess the reliability and security of AI systems. This includes identifying risks at an early stage, testing the robustness of systems under real conditions and systematically documenting possible sources of error. This is the only way to develop trustworthy applications that are sustainable in the company in the long term.
Overall, the EU AI regulation pursues a clear goal with the competence requirement: the use of artificial intelligence should not only be innovative, but above all responsible, comprehensible and human-centered. Companies are therefore obliged to invest not only in technology, but also in know-how and education – and to put people at the center of their AI strategy.
Prohibited AI practices according to Article 5 of the AI Act
From February 2, 2025, certain AI practices are expressly prohibited within the EU. These include:
- Manipulative or deceptive AI technologies that unconsciously lead people to make harmful decisions.
- Social scoring systems that evaluate the behavior of individuals and entail discriminatory consequences.
- Real-time biometric surveillance in public spaces, unless it serves specific legal purposes such as combating terrorism.
- Emotional AI for workplace monitoring or in schools, which draws conclusions about a person's emotional state from facial expressions or body language.
- Autonomous systems that can physically or psychologically harm people, such as fully automated weapon systems.
Phased implementation of the AI regulation
The EU AI regulation will be implemented in several phases:
- Phase 1 (2024 – 2025): Adoption of the regulation and introduction of basic transparency obligations for providers of AI systems.
- Phase 2 (2026): Application of the rules for high-risk AI, including data quality and human supervision requirements.
- Phase 3 (2027): Full implementation of the regulation with mandatory penalties for non-compliance.
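The phased rollout above can be expressed as a small date lookup, useful for internal planning dashboards. This is a simplified sketch: the February 2, 2025 date for prohibitions is stated earlier in this article, while the phase 2 and 3 entries simply use the start of the year named in each phase; the regulation's actual transition periods are more granular.

```python
from datetime import date

# Simplified milestones from the phased rollout above.
# Phase 2/3 dates are approximated as January 1 of the named year.
MILESTONES = [
    (date(2025, 2, 2), "prohibitions on unacceptable-risk practices apply"),
    (date(2026, 1, 1), "rules for high-risk AI apply"),
    (date(2027, 1, 1), "full implementation with mandatory penalties"),
]

def applicable_milestones(today: date) -> list[str]:
    """Return the milestones already in force on a given date."""
    return [desc for deadline, desc in MILESTONES if today >= deadline]
```

For example, a check in mid-2025 would show only the prohibitions in force, signaling that high-risk compliance work still has a deadline ahead.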
Companies should therefore start adapting their internal processes and training measures accordingly.
Impact on companies
The EU AI Regulation provides for no general training requirement, but rather an obligation for companies to provide evidence. As part of the general duty of care and when dealing with certain high-risk AI systems, appropriate competence building can be useful in order to minimize risks and get the best possible benefit from AI applications. Companies should therefore examine in which contexts they need appropriate qualifications. In order to structure the use of AI optimally, both strategically and economically, it is recommended – depending on the respective company situation – to implement the following measures:
- Establish training programs and workshops for specialists and managers to ensure a basic understanding of AI applications and their regulatory requirements.
- Develop guidelines for the ethical and safe use of AI that affect all departments, from IT to management.
- Form interdisciplinary teams from IT, compliance and specialist departments that jointly evaluate AI projects and identify risks at an early stage.
- Use external expertise to evaluate and secure AI projects, especially when implementing high-risk AI.
- Conduct regular audits and compliance checks to ensure that existing AI systems continue to comply with the requirements of the EU AI Regulation.
The measures mentioned are not mandatory, but rather business-motivated recommendations to support the responsible and successful use of AI in the company. Companies have a certain amount of leeway in how they reconcile legal requirements and economic goals. For the third measure, ethics-as-a-service approaches can also be used, in which the ethics assessment is outsourced. With regard to the fourth point, depending on the company strategy, it may also make sense to cover the risk management process internally if external expertise is not desired.
Our training offering: Practical knowledge through the CONET_AI Literacy Program
In order to meet the requirements of the EU AI Regulation and promote the responsible use of artificial intelligence in companies, CONET offers a structured and practical training program: the CONET_AI Literacy Program.
The advantages at a glance:
- Modular structure: Content can be flexibly tailored to different roles in the company – from general employees to technical teams to managers.
- Practical orientation: Interactive elements such as case studies, workshops and hands-on demos make AI tangible and promote sustainable learning.
- Holistic approach: From basics to ethical and legal aspects to economic implications, the program covers all relevant subject areas.
- Regulatory preparation: Supports companies in adapting specifically to the requirements of the EU AI Regulation and other regulations.
For companies that want to enable their employees to use AI safely, competently and compliantly, the CONET_AI Literacy Program offers the ideal introduction.
We also provide further consulting services on the topic of AI governance. As part of our Govern_AI service offering, we carry out readiness checks, risk assessments and governance strategy consulting.
CONET_AI Literacy Program
Start now with the CONET_AI Literacy Program – for safe, competent and compliant use of AI in your company. Benefit from practical training modules, targeted regulatory preparation and additional AI governance advice.
Contact us
Was this article helpful to you? Or do you have further questions about the EU AI regulation? Write us a comment or give us a call.
Source: https://www.conet.de/blog/ki-kompetenzpflicht-und-die-eu-ki-verordnung-was-cios-wissen-muessen/
