AI Act: obligations, timetable and corporate compliance
The AI Act (Artificial Intelligence Act) is the European regulation that governs the use of artificial intelligence in Europe according to its level of risk. It imposes obligations on businesses in terms of risk management, transparency and governance.
This text follows on from major European regulatory initiatives such as the eIDAS 2 regulation and the Cyber Resilience Act. It has a strong impact on risk management, governance and the organization of compliance for businesses and their AI systems. Let's take stock.
The objective of the European regulation for AI systems
The AI Act, or European AI law, is the first comprehensive legal framework aimed at regulating artificial intelligence within the European Union. The European Commission designed this text with a clear objective: to ensure that the development and use of these technologies respect fundamental rights, while stimulating innovation and protecting society.
This text seeks to establish harmonized rules for the marketing and use of solutions based on machine learning. It aims to create a trusted ecosystem, where transparency, security, data protection and quality are fundamental.
This framework doesn't try to hold back progress. On the contrary, it establishes essential legal certainty for vendors and users. By setting high standards, the EU aims to become a global model for technological governance. Convergence with other texts, such as the GDPR, reinforces the overall objective of digital security and compliance in Europe.
The risk-based approach: how to classify intelligence systems?
The core of the AI Act is based on differentiated risk management. This approach determines all associated obligations and procedures.
Risk levels defined by the AI Act
The AI Act is based on a classification of artificial intelligence systems into four levels of risk:
- Unacceptable risk: systems prohibited because of their impact on fundamental rights
- High risk: closely supervised systems subject to strict compliance obligations
- Limited risk: systems subject to transparency requirements
- Minimal risk: systems without specific obligations
This classification directly determines the regulatory requirements applicable to each AI system. Let's study this in detail.
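As an illustrative sketch only (the tier names follow the regulation, but the system names and inventory are invented for this example), the four levels can be modeled as an enumeration used to tag an internal AI inventory:

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers defined by the AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict compliance obligations
    LIMITED = "limited"            # transparency requirements
    MINIMAL = "minimal"            # no specific obligations

# Hypothetical internal inventory; system names are invented for illustration.
inventory = {
    "cv-screening-tool": RiskLevel.HIGH,
    "customer-chatbot": RiskLevel.LIMITED,
    "spam-filter": RiskLevel.MINIMAL,
}

high_risk = [name for name, level in inventory.items() if level is RiskLevel.HIGH]
print(high_risk)  # → ['cv-screening-tool']
```

Tagging each system this way makes it straightforward to derive the applicable obligations per tier later in the compliance process.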
Prohibited practices in AI: the unacceptable risk for the EU
The AI Act strictly prohibits certain uses deemed to be at “unacceptable risk.” These practices are listed in the regulation in order to protect citizens against manipulation, mass surveillance or exploitation of their vulnerabilities. Some of these prohibited AI practices include:
- Social scoring systems operated by governments.
- Exploitation of the vulnerabilities of specific groups (children, people with disabilities).
- Real-time remote biometric identification in publicly accessible spaces, subject to very narrow exceptions.
- Subliminal techniques that distort people's behavior.
These prohibitions aim to protect fundamental rights in the European Union.
High-risk AI systems and their obligations
The “high risk” level is the central pillar of the AI Act. These are AI systems likely to significantly affect people's lives or safety, or their access to critical services (health, education, infrastructure, HR, transport, etc.).
The associated obligations are numerous and far-reaching:
- Establishment of a continuous risk management system.
- Quality, reliability and security of the data feeding the models.
- Complete documentation and log keeping.
- Increased transparency with end users (obligation to provide information).
- Effective human control.
- High level of robustness and technical performance.
- Regular audits and system compliance checks.
The intervention of notified bodies is often necessary to validate the conformity of the systems concerned, in particular those listed in Annex III.
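The list of obligations above lends itself to a simple gap analysis. The sketch below is purely illustrative: the obligation labels are our shorthand, not the regulation's wording, and a real assessment would track evidence per article:

```python
# Obligation labels are our shorthand for illustration, not the regulation's wording.
HIGH_RISK_OBLIGATIONS = [
    "risk_management_system",
    "data_quality_and_governance",
    "technical_documentation",
    "logging_and_record_keeping",
    "user_transparency",
    "human_oversight",
    "robustness_and_accuracy",
]

def compliance_gaps(evidence: dict) -> list:
    """Return the obligations for which no supporting evidence is recorded."""
    return [o for o in HIGH_RISK_OBLIGATIONS if not evidence.get(o, False)]

status = {"risk_management_system": True, "human_oversight": True}
print(len(compliance_gaps(status)))  # → 5 obligations still lack evidence
```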
Systems with limited or minimal risk: transparency and best practices
For so-called “limited risk” AI systems (e.g. chatbots, deepfakes, conversational assistants), the AI Act above all requires explicit transparency: the user must be clearly informed of the algorithmic nature of the system or the automatic generation of content.
For minimal-risk AI systems (e.g. spam filters, video games, simple marketing tools), no specific obligations apply, but the EU strongly encourages voluntary codes of conduct following best practices in ethical innovation.
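For a chatbot, this transparency obligation can be as simple as a disclosure notice on the first message. A minimal sketch, assuming the notice wording (the AI Act requires that users be clearly informed, but does not prescribe this exact text):

```python
# The notice wording is ours; the AI Act only requires that users be clearly
# informed that they are interacting with an AI system.
AI_DISCLOSURE = "You are interacting with an AI system."

def wrap_reply(reply: str, is_first_turn: bool) -> str:
    """Prepend the transparency notice to the first message of a conversation."""
    return f"{AI_DISCLOSURE}\n{reply}" if is_first_turn else reply

print(wrap_reply("Hello! How can I help?", is_first_turn=True))
```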
What are the obligations for AI companies and developers?
Compliance with the AI Act requires a structured and continuous approach. Providers of general-purpose AI models (GPAI) are subject to specific obligations, particularly in terms of technical documentation, transparency and, for the most advanced models, the management of systemic risks.
All processes must be documented, both to apply European AI law and to reassure stakeholders about system security. The deployer (the user company) must ensure that the system is used in accordance with the provider's instructions, continuously monitor its operation, and report any incidents or quality deficiencies.
European law and data protection: GDPR and AI Act synergies
The AI Act complements (and does not replace) the GDPR, in particular through the obligation of privacy by design from the earliest stages of model development. Several articles of the text require compliance with data protection rules at all stages of AI development and deployment.
The associated challenges are numerous:
- Analysis of training databases (anonymization, minimization, access and deletion rights, model explainability).
- Ongoing updates of the data protection impact assessment.
- Ensuring compliance with other sectoral regulations.
Who is affected by the AI Act?
The AI Act applies to a wide range of actors, well beyond developers of artificial intelligence technologies alone.
Three main categories are concerned:
- AI system providers, who design or market solutions based on artificial intelligence on the European market.
- Deployers (or professional users), i.e. companies that integrate and use these systems in their activities (e.g. HR, marketing, customer service).
- Importers and distributors, who make AI solutions available within the European Union.
It is important to note that the AI Act also applies to companies located outside the European Union as long as their AI systems are used on the European market.
In practice, any business using artificial intelligence tools, including solutions like ChatGPT, is potentially affected by the AI Act.
Implementing the AI Act: Timeline, Governance, and Sanctions
AI Act calendar: key dates to remember
Businesses must engage in a continuous compliance and governance audit dynamic to meet the requirements of the text. The application schedule is gradual, to support the maturity of internal practices:
- August 1, 2024: entry into force of the regulation
- February 2, 2025: prohibition of unacceptable-risk practices
- August 2, 2025: requirements for general-purpose AI models (GPAI)
- August 2, 2026: full compliance for high-risk systems
How do you comply with the AI Act?
Compliance with the AI Act is based on a structured and continuous approach, which is part of a risk governance logic.
The compliance process is based on the following steps:
1. Map AI systems
Identify all the artificial intelligence solutions used in your organization, whether developed in-house or provided by third parties.
2. Classify systems according to their risk level
Analyze each system to determine if there is an unacceptable, high, limited, or minimal risk in accordance with the framework set out in the regulation.
3. Establish appropriate governance
Define clear responsibilities, validation processes, and control mechanisms to guide the use of AI in your organization.
4. Document and trace processes
Ensure the complete traceability of data, models, and algorithmic decisions in order to meet transparency and auditability requirements.
5. Establish continuous monitoring
Monitor system performance, identify potential misuse, and adapt your measures as risks evolve.
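The first two steps above (mapping and classification) can be sketched in a few lines. This is a deliberately naive illustration with invented system names; a real assessment must apply the regulation's own criteria, not keyword matching:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    purpose: str
    vendor: str  # "internal" or a third-party provider

# Step 1: mapping — a hypothetical inventory (names are invented)
systems = [
    AISystem("resume-ranker", "HR candidate screening", "internal"),
    AISystem("support-bot", "customer chatbot", "vendor-x"),
    AISystem("spam-filter", "email filtering", "vendor-y"),
]

# Step 2: classification — a deliberately naive keyword heuristic;
# a real assessment must apply the regulation's own criteria (e.g. Annex III).
def classify(system: AISystem) -> str:
    if "HR" in system.purpose:
        return "high"      # employment-related uses appear in Annex III
    if "chatbot" in system.purpose:
        return "limited"   # transparency obligations apply
    return "minimal"

print({s.name: classify(s) for s in systems})
# → {'resume-ranker': 'high', 'support-bot': 'limited', 'spam-filter': 'minimal'}
```

The value of even a rough sketch like this is that it forces the inventory (step 1) to exist before classification (step 2) is attempted, which is the order the regulation assumes.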
The role of EU supervisory authorities
At European level, the European Commission has set up an AI Office responsible for overseeing general-purpose AI models and ensuring the harmonized application of the regulation across all Member States. At national level, each Member State designates a competent authority. In France, the CNIL should play a central role, particularly on aspects related to personal data, alongside other competent authorities depending on the sector.
Anticipating controls and structuring its governance: a strategic challenge
For companies, the AI Act imposes a need for foresight and rigor in order to avoid sanctions and to build lasting governance around artificial intelligence. Implementing regular checks, inspired by risk management best practices, makes it possible to structure and secure the entire AI value chain, from the design of models to their use within business processes.
Anticipating controls means ensuring complete traceability, solid documentation procedures, continuous risk assessment and security measures adapted to each level of exposure, as already recommended by current information systems security standards.
Structuring your AI Act compliance with a GRC approach
Faced with the complexity of the requirements of the AI Act, a tooled approach such as GRC (Governance, Risk and Compliance) makes it possible to structure the process effectively.
It is based on several pillars:
- Centralization of regulatory requirements, in order to link the obligations of the AI Act to the other applicable standards (GDPR, NIS2, DORA...).
- Mapping between risks, controls and security measures, making it possible to align regulatory requirements with operational arrangements.
- Ongoing compliance monitoring, thanks to indicators, regular audits and dynamic updating of systems.
- Reporting to stakeholders, facilitating decision-making and the demonstration of compliance with the authorities.
This approach makes it possible to transform a regulatory constraint into a lever for the strategic management of artificial intelligence.
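A cross-framework control mapping, the second pillar above, is often just a many-to-many table. A minimal sketch, where the control IDs and descriptions are invented for illustration:

```python
# Illustrative cross-framework mapping; control IDs and wording are invented.
CONTROL_MAP = {
    "CTRL-LOGGING": {
        "AI Act": "record-keeping for high-risk systems",
        "GDPR": "accountability and records of processing",
        "DORA": "ICT incident logging",
    },
    "CTRL-OVERSIGHT": {
        "AI Act": "human oversight of high-risk systems",
        "NIS2": "governance and accountability measures",
    },
}

def frameworks_covered(control_id: str) -> set:
    """Return the set of regulations a single control contributes to."""
    return set(CONTROL_MAP.get(control_id, {}))

print(sorted(frameworks_covered("CTRL-LOGGING")))  # → ['AI Act', 'DORA', 'GDPR']
```

Maintaining one table like this, rather than a separate checklist per regulation, is what lets a single implemented control demonstrate compliance with several texts at once.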
AI Act: pay attention to the sanctions provided for
The importance of this preparation is reinforced by the scope of the sanctions provided for: up to 35 million euros or 7% of global turnover, whichever is higher, for the most serious offenses. Penalties are graded according to the nature of the breaches.
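The "whichever is higher" rule means the exposure scales with company size. A simplified sketch of the ceiling for the most serious breaches (integer euros, for illustration only):

```python
def max_fine(annual_turnover_eur: int) -> int:
    """Ceiling for the most serious breaches: the higher of the flat cap
    (35 million euros) and 7% of worldwide annual turnover (simplified)."""
    return max(35_000_000, annual_turnover_eur * 7 // 100)

print(max_fine(1_000_000_000))  # → 70000000: 7% of turnover outweighs the flat cap
print(max_fine(100_000_000))    # → 35000000: the flat cap applies
```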
Thus, structuring your AI governance is no longer optional but a key factor in competitiveness and resilience: anticipating controls, integrating robust processes and promoting a cyber risk culture are becoming essential for any organization wishing to operate on the European market with complete peace of mind.
Compliance with the AI Act requires a centralized view of risks, controls, and regulatory obligations. Cyber governance platforms make it possible to structure this approach and avoid management in silos.
Learn how to simplify your AI Act compliance with a GRC approach.
Sectoral overview: impact of the AI Act by area
Health, finance, education, infrastructure: increased requirements and continuous adaptation
- In the health sector, the central challenge remains the integration of AI solutions capable of guaranteeing data security, complete traceability of algorithmic operations, and proactive management of risks related to quality of care and privacy protection. Risk assessment processes and compliance audits rely on expert data governance approaches adapted to the sensitivity of medical information.
- In finance, the increasing use of AI tools in analysis, fraud detection or investment management requires compliance with reference standards such as PCI DSS or DORA, which complement the regulated management of high-risk AI required by the AI Act. Harmonization with ISO 22301 on business continuity and cyber resilience is also becoming a major focus in securing the financial ecosystem.
- In the education sector, AI systems must ensure the fairness and transparency of scoring or orientation algorithms, and guarantee increased protection of student data. This requires a high level of traceability and a policy of regular evaluation of social and ethical impact, in line with the latest developments in algorithmic governance.
- Critical infrastructures, finally, combine the need for a high level of operational security with very strict management of access to sensitive information. Here, the anticipation of technological risks and the implementation of continuous controls are at the heart of compliance approaches.
Opportunities and challenges for businesses on the European market
The AI Act is also a great opportunity to get ahead on the international scene, provided you invest in cyber maturity and internal governance. The most agile companies will benefit from the gap in practices with the rest of the world.
To ensure excellence, the support of a group of experts, the integration of tooled solutions, and dialogue with the competent authorities are recommended.
Preparing for the AI Act: best practices, tools and support
Preparing effectively for the AI Act requires a structured approach in several steps, which must be integrated at all levels of the organization. Here are the main axes to anticipate and succeed in your compliance:
Start by taking a comprehensive inventory of all of your digital assets and AI systems
- Identify where your algorithms are, what level of documentation is associated with them (procedures, logs, contracts...), and assess the quality, reliability and security of your models.
- Also check which processes are already in place for risk assessment and incident management. Relying on experienced risk managers or an Information Systems Security Manager helps make this diagnosis more reliable.
Adapt and strengthen your internal policies
Review and update all of your security policies (physical and logical), user access and rights management, crisis management and business continuity plans. It is crucial that your cybersecurity, risk management, compliance and business teams collaborate from the moment AI solutions are designed or modified, in order to anticipate all regulatory requirements.
Develop a training and internal communication program
The success of compliance depends on the involvement of all employees. In particular, implement targeted awareness-raising actions adapted to each role, relying on training modules, practical workshops and internal guides on AI, data protection and the new regulatory framework.
Establish regulatory and technical monitoring
- Follow closely the evolution of European texts, recommendations from the European Commission, and sectoral best practices to best anticipate future adaptations.
- Update your repositories and documentation as soon as new guides or standards appear. As the AI Act is set to evolve, this active regulatory watch safeguards the sustainability of your compliance investments.
Integrate AI compliance into the overall risk management strategy
Compliance with the AI Act should be thought of as a cross-cutting project that interacts with your other obligations (e.g. GDPR, NIS2, ISO, PCI DSS). Sharing standards, centralizing audits and capitalizing on the experience acquired in other regulatory projects will save your teams time and improve efficiency.
By adopting this step-by-step approach, your organization will be equipped to anticipate controls, ensure compliance and transform regulatory constraints into real strategic levers.
Do you want to see concretely how to simplify and automate your compliance with the AI Act? Ask for your personalized demo.
FAQ: all you need to know about the AI Act
What is the AI Act in summary?
The AI Act is the new European Union regulation aimed at providing a harmonized framework for the creation, marketing and use of artificial intelligence systems. It imposes a classification of AI systems according to four risk levels: unacceptable, high, limited and minimal. The aim is to ensure security, transparency and respect for people's fundamental rights, while supporting responsible innovation on the European market.
What are the penalties for non-compliance?
The financial sanctions provided for in the AI Act are graded according to the severity of the breaches observed. For example, providing inaccurate or incomplete information during checks can result in fines of up to 7.5 million euros or 1% of global annual turnover. The most serious offenses, such as the use of unacceptable-risk AI or the violation of prohibitions, can be punished by fines of up to 35 million euros or 7% of global turnover.
My business uses generative AI tools like ChatGPT, are we concerned?
Yes, in practice, many businesses using AI tools like ChatGPT may be affected. It is your responsibility to ensure that the use of this AI complies with regulations, in particular concerning data protection, transparency towards users and internal compliance obligations. This often involves updating your procedures, raising awareness among your teams and monitoring the use of the tool.
What is the difference between the AI Act and the GDPR?
The GDPR regulates the collection and processing of personal data, guaranteeing the fundamental rights and privacy of individuals. The AI Act specifically regulates the development, use and marketing of AI systems, regardless of whether or not they handle personal data. The two texts are complementary and apply together: double compliance must therefore be ensured, especially in terms of security, documentation and algorithmic transparency.



