Contents of the Compromise Reached on the AI Act
In early February 2024, EU Member States reached a final agreement on the regulation establishing harmonized rules on artificial intelligence. This milestone comes after two and a half years of intense negotiations between the European Parliament, the Council, and the European Commission.
The definition adopted by the EU mirrors that of the OECD and considers an AI system to be "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
The regulatory approach is based on classifying AI systems according to their risk level to “people’s health, safety, or fundamental rights,” divided into three main categories. Requirements are proportional to the level of risk:
- Unacceptable-risk AI systems will be banned from the market.
- High-risk AI systems must comply with obligations such as CE marking and conformity assessments, implementation of risk and data management systems, technical documentation, transparency, and incident management.
- Low or minimal-risk AI systems must fulfill transparency obligations toward users (e.g., disclosing that they are interacting with an AI) or may voluntarily adhere to codes of conduct.
These new rules apply across all sectors, using a use-case-based approach to determine whether an AI system qualifies as high-risk. The list of high-risk use cases includes (a schematic sketch follows the list):
- Biometric identification
- Management and operation of critical infrastructure (e.g., transport, water, gas)
- Safety components of products (e.g., AI-assisted robotic surgery)
- Essential public and private services (e.g., credit scoring impacting access to loans)
- Law enforcement applications (e.g., lie detectors, recidivism risk assessment)
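The use-case-based logic above can be rendered schematically. The following Python sketch is purely illustrative, not an authoritative mapping: classification under the AI Act is a legal assessment, and the use-case keys below are hypothetical labels paraphrasing the list above.

```python
# Illustrative only: a schematic rendering of the three-tier,
# use-case-based classification summarized above. The use-case labels
# are hypothetical; real classification is a legal exercise.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned from the market"
    HIGH = "CE marking, conformity assessment, risk/data management"
    LOW_OR_MINIMAL = "information obligations or voluntary codes"

HIGH_RISK_USE_CASES = {
    "biometric_identification",
    "critical_infrastructure_operation",
    "product_safety_component",
    "essential_public_private_services",
    "law_enforcement",
}

def classify(use_case: str, prohibited_practice: bool = False) -> RiskTier:
    if prohibited_practice:  # practices banned outright by the Act
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USE_CASES:
        return RiskTier.HIGH
    return RiskTier.LOW_OR_MINIMAL

print(classify("biometric_identification").value)  # high-risk obligations
```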
The regulation incorporates ethical concerns, particularly through transparency and human oversight throughout the lifecycle of AI systems. Obligations vary depending on the role of the party in the value chain: Manufacturer, Importer, Provider (the most heavily regulated), User/Deployer, or Distributor.
The EU AI regulation applies to both public and private actors inside and outside the EU (extra-territorial effect), as long as the AI system is marketed in the EU or impacts individuals located in the EU.
The bulk of the regulation will apply two years after its official entry into force, expected in the course of 2024 (i.e., full application in 2026). However, the ban on unacceptable-risk AI will apply as early as six months after entry into force. Rules concerning general-purpose AI, governance, notified bodies, and sanctions will apply twelve months after entry into force.
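To make the staggered timeline concrete, here is a small sketch; the entry-into-force date used is an assumption for illustration, since the actual date depends on publication in the EU Official Journal.

```python
# Staggered application timeline. The entry-into-force date is an
# assumption for illustration only.
from datetime import date
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

entry_into_force = date(2024, 7, 1)  # hypothetical
milestones = {
    "Ban on unacceptable-risk AI": 6,
    "GPAI, governance, notified bodies, sanctions": 12,
    "General application": 24,
}
for rule, months in milestones.items():
    print(f"{rule}: applies from {entry_into_force + relativedelta(months=months)}")
```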
The level of fines depends on the type of infringement, with the cap set at the stated amount or percentage of global annual turnover, whichever is higher (more proportionate caps apply to SMEs and startups):
- Up to €35 million or 7% of global annual turnover for violations involving banned AI applications
- Up to €15 million or 3% of global annual turnover (2% for SMEs or startups) for failure to comply with other obligations, including those applying to general-purpose AI models
- Up to €7.5 million or 1.5% of global annual turnover for providing incorrect information
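A minimal sketch of how these ceilings combine, assuming the "whichever is higher" rule stated in the compromise press release; the turnover figure in the example is hypothetical.

```python
# Fine ceiling under the compromise: the fixed amount or the percentage
# of global annual turnover, whichever is higher (assumption based on
# the compromise press release; SMEs benefit from lower caps).
def fine_ceiling(fixed_eur: float, turnover_pct: float,
                 global_turnover_eur: float) -> float:
    """Return the maximum applicable fine in euros."""
    return max(fixed_eur, turnover_pct * global_turnover_eur)

# Hypothetical provider with EUR 2 billion global annual turnover
# marketing a banned AI application (the EUR 35M / 7% tier):
print(fine_ceiling(35e6, 0.07, 2e9))  # 140000000.0, since 7% > EUR 35M
```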
AI in Cybersecurity: Both Risk and Opportunity
Generative AI is now seen as a major opportunity for business growth. In terms of cybersecurity, AI’s capabilities and process automation are especially promising in the following use cases:
- Accelerating risk analyses by automatically feeding data into risk databases (although the quality and usability of that data must be ensured)
- Detecting incidents or anomalies, such as suspicious behavior, fraud, or vulnerabilities, while reducing false positives (see the sketch after this list)
- Responding to cyber threats, especially by prioritizing remediation actions
- Evaluating risks
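As an illustration of the anomaly-detection use case, here is a minimal sketch using scikit-learn's IsolationForest on synthetic data; the feature semantics and the contamination value are assumptions.

```python
# Minimal anomaly-detection sketch: flag outliers in synthetic
# network-activity features (bytes sent, failed logins). Feature
# semantics and the contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[500.0, 1.0], scale=[50.0, 0.5], size=(1000, 2))
outliers = np.array([[5000.0, 20.0], [4500.0, 15.0]])  # e.g., exfiltration bursts
X = np.vstack([normal, outliers])

# 'contamination' bounds the expected share of anomalies, which is one
# lever for keeping the false-positive rate in check.
detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = detector.predict(X)  # -1 = anomaly, 1 = normal
print(f"Flagged {int((labels == -1).sum())} suspicious events")
```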
AI can therefore enhance the operational efficiency of IT and cybersecurity teams. These productivity gains are likely to free up human resources for higher-value projects, rather than replace them.
However, the benefits of AI can only be realized with a well-defined technical and governance framework to regulate and secure its use. These foundations are critical for building trust and ensuring system security.
AI inherently poses risks as well as opportunities. For example, by enabling more sophisticated attacks, AI could increase the frequency and intensity of cyberattacks.
Identified risks include:
- Data leakage or exposure
- Uncertainty over result reliability or lack of transparency in scenario modeling
- Loss of expertise, reasoning ability, or control
Cybersecurity: A Catalyst for AI
One of the key challenges of AI systems, particularly high-risk ones, lies in their cybersecurity: the systems themselves must be secured against cyber threats.
The potential of AI can be misused: cyberattacks targeting AI systems may leverage AI-specific assets such as training datasets (e.g., data poisoning) or trained models (e.g., adversarial attacks), or exploit vulnerabilities in the digital components of the AI system or its underlying ICT infrastructure.
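The adversarial-attack vector mentioned above can be illustrated with the canonical Fast Gradient Sign Method (FGSM). The PyTorch sketch below assumes a trained classifier `model`, inputs normalized to [0, 1], and an illustrative `epsilon`; it is a sketch of the technique, not a reference implementation.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Fast Gradient Sign Method: nudge the input in the direction that
    increases the classifier's loss, producing an adversarial example."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # One signed-gradient step, clamped to the valid input range [0, 1].
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()
```

For surprisingly small values of epsilon, the perturbed input often flips a model's prediction while remaining nearly indistinguishable to a human, which is exactly why resilience requirements of this kind matter.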
Cybersecurity thus plays a vital role in ensuring that AI systems are resilient against malicious attempts to manipulate, misdirect, degrade, or compromise their behavior or safety properties.
The European AI regulation mandates that providers of high-risk AI systems must ensure a level of cybersecurity appropriate to the risks. This includes securing the AI models themselves and the underlying IT infrastructure.
Given the inseparable nature of AI’s risks and opportunities, trust is a central issue, and cybersecurity is a key success factor for fostering trust in AI.