by
Laila Mohamed Ali
CyJurII Scholar
on 6 October 2025
1. Introduction
The EU Artificial Intelligence Act (AI Act), adopted on 13 June 2024, is the first-ever legal framework on AI. It addresses the risks of AI and positions Europe to play a leading role globally.
The AI Act is part of a wider package of policy measures and sets out a clear set of risk-based rules for AI developers and deployers regarding specific uses of AI.
It aims to ensure trustworthy, human-centric, and safe AI across the European Union while promoting innovation. The regulation entered into force on 1 August 2024, with gradual application from 2025 to 2027.
The Act applies to:
• AI systems developed or deployed within the EU or whose output affects the EU market.
• Providers, deployers, importers, and distributors of AI systems.
Military and research uses are generally excluded.
2. Risk-Based Framework
The Act classifies AI systems into four risk levels:
1. Unacceptable Risk (Prohibited) | e.g. social scoring, individual criminal-offence risk assessment or prediction, manipulative techniques, or emotion recognition in workplaces.
2. High Risk | used in critical areas like health, education, and law enforcement; strict compliance required.
3. Limited Risk | transparency duties apply; providers of generative AI must ensure that AI-generated content is identifiable. In addition, certain AI-generated content must be clearly and visibly labelled, namely deep fakes and text published with the purpose of informing the public on matters of public interest.
4. Minimal Risk | few or no obligations.
3. Key Obligations
For high-risk AI, providers must:
• Establish a risk management system throughout the high-risk AI system’s lifecycle;
• Conduct data governance, ensuring that training, validation, and testing datasets are relevant, sufficiently representative and, to the best extent possible, free of errors and complete according to the intended purpose;
• Draw up technical documentation to demonstrate compliance and provide authorities with the information needed to assess that compliance;
• Design the high-risk AI system for record-keeping, so that it automatically records events relevant for identifying national-level risks and substantial modifications throughout the system’s lifecycle;
• Provide instructions for use to downstream deployers to enable their compliance;
• Design the high-risk AI system to allow deployers to implement human oversight;
• Design the high-risk AI system to achieve appropriate levels of accuracy, robustness, and cybersecurity;
• Establish a quality management system to ensure compliance.
4. Conclusion
The EU Artificial Intelligence Act (2024) marks a historic milestone in the global governance of AI, being the first comprehensive and binding legal framework to regulate artificial intelligence according to its risk level. By introducing a proportionate, human-centric, and innovation-friendly approach, the Act aims to ensure that AI technologies serve society while respecting fundamental rights, transparency, and accountability.
The regulation’s gradual implementation from 2025 to 2027 allows both public and private sectors to adapt effectively, promoting legal certainty and trust in the AI market. Its structured obligations set a strong precedent for responsible AI development, ensuring safety, fairness, and ethical use.
Ultimately, the AI Act not only strengthens the European Union’s digital sovereignty but also sets an international benchmark for AI regulation, inspiring other jurisdictions to adopt similar frameworks that balance technological progress with ethical governance.