
The EU Artificial Intelligence Act: a step toward regulating artificial intelligence

Virginie COLAIUTA, Partner, LMS Legal LLP and Basil Thévignot, Associate, LMS Legal LLP

Artificial Intelligence (“AI”) has become an integral part of modern life, influencing sectors ranging from healthcare to finance, and transportation to customer service.

AI systems are designed to analyse vast amounts of data, recognise patterns, and make decisions without direct human control. However, as AI’s capabilities expand, so do the risks associated with its use. These risks include errors, bias, privacy invasion, and even harmful decision-making. Consequently, clear regulations are necessary to ensure AI’s responsible use while fostering innovation. In response to these concerns, the European Union introduced the EU Artificial Intelligence Act, Regulation (EU) 2024/1689 (“EU AI Act”) in 2024, aiming to regulate AI systems and establish a comprehensive legal framework for their deployment and use within the EU. This article provides an overview of the Act.

The EU AI Act: a pioneering risk-based regulatory framework

The EU AI Act is the first comprehensive legal framework to regulate AI within the European Union. Published on 12 July 2024, the Act will apply from 2 August 2026 (Article 113). Its primary goal is to ensure that AI technologies are developed and used in ways that are safe, ethical, and transparent. The Act applies to all AI providers (developers of AI systems) and deployers (users of AI systems for business purposes) whose systems impact individuals within the EU, regardless of whether those providers and deployers are themselves located in the EU.

The AI Act adopts a risk-based approach, categorizing AI systems according to their potential risks. These categories are:

1. Unacceptable risk (Article 5): certain AI applications are outright banned, particularly those that manipulate or deceive users or exploit vulnerabilities based on age, disability or a specific social or economic situation.

2. High risk (Articles 6 to 49): AI systems used in critical sectors such as law enforcement, healthcare, and transportation are subject to stringent requirements.

3. Low risk (Article 50): AI systems that present minimal risks are only subject to transparency requirements, ensuring users are informed when they interact with AI and that outputs are clearly marked as artificially generated.

For example, AI systems used in biometric identification, critical infrastructure, or education will fall under the high-risk category and must meet specific compliance standards, including data governance, technical documentation, and robust cybersecurity protocols. In contrast, low-risk AI systems, such as those that generate deepfake content or interact directly with natural persons, will be required to disclose their artificial nature to users.

Key provisions of the EU AI Act

The AI Act introduces several key provisions to address the risks AI poses to fundamental rights, in particular for high-risk AI systems. These include:

- Data governance (Article 10): providers must implement rigorous data management practices, ensuring that datasets used to train AI systems meet quality, transparency, and security standards.

- Transparency (Article 13): AI systems must provide users with clear information, including as to their intended purpose, level of accuracy, and technical capabilities.

- Human oversight (Article 14): high-risk AI systems must be designed to ensure effective human oversight, allowing natural persons to intervene when necessary.

- Accuracy, robustness and cybersecurity (Article 15): AI systems must be designed and developed in a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity.

- Sanctions and penalties (Article 99): non-compliance with the Act can lead to severe penalties, including fines up to €35 million or 7% of global annual turnover, whichever is higher.

Importantly, the Act provides clarity regarding the role of AI systems in society, setting out clear rules for their development, deployment, and monitoring.

Conclusion

The EU AI Act represents a significant step in addressing the ethical, safety, and legal challenges posed by artificial intelligence. By establishing clear rules for high-risk and low-risk AI systems, the Act provides a framework that ensures AI systems are developed and deployed responsibly. As AI technologies continue to evolve, effective regulation will be crucial in balancing innovation with public safety and rights.
