On 12 July 2024, Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (the “AI Act”) was published in the Official Journal of the European Union.
The AI Act is a pioneering regulation that establishes harmonised rules for artificial intelligence within the EU. Its purpose is to improve the functioning of the internal market by laying down a uniform legal framework for the development, placing on the market, and use of AI systems in the EU. It aims to promote human-centric and trustworthy AI, to safeguard against the harmful effects of AI, and to support innovation and the free movement of AI-based goods and services.
The AI Act applies to:
• providers of AI systems;
• deployers of AI systems established in the EU (a deployer being any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity), as well as deployers located in a third country where the output produced by the AI system is used in the EU;
• importers;
• distributors;
• product manufacturers integrating AI into their products;
• authorised representatives of non-EU providers; and
• affected persons located in the EU.
1. Prohibited practices
The AI Act outlines prohibited AI practices, including:
1. Subliminal and Manipulative Techniques: AI systems that deploy subliminal or purposefully manipulative techniques to materially distort a person’s behaviour, impairing their ability to make an informed decision and causing, or being reasonably likely to cause, significant harm.
2. Exploiting Vulnerabilities: AI systems that exploit age, disability, or socio-economic vulnerabilities to distort behaviour and cause harm.
3. Social Scoring: AI systems that evaluate or classify individuals based on behaviour or personal characteristics, leading to unjustified or disproportionate treatment.
4. Predictive Policing: AI systems that assess or predict the risk of a person committing a criminal offence based solely on profiling or on the assessment of personality traits, except for AI systems used to support a human assessment based on objective and verifiable facts.
5. Facial Recognition Databases: AI systems that create or expand facial recognition databases through untargeted image scraping from the internet or CCTV footage.
6. Emotion Recognition: AI systems inferring emotions in workplaces or educational institutions, except for medical or safety reasons.
7. Biometric Categorisation: AI systems categorising individuals based on biometric data to infer sensitive attributes such as race, political opinions, or sexual orientation.
8. Real-Time Biometric Identification: Use of real-time remote biometric identification systems in public spaces for law enforcement, except for specific serious situations like locating missing persons, preventing threats, or identifying criminal suspects with judicial authorization.
2. High-risk AI systems
High-risk AI systems are, on the one hand, AI systems that are safety components of products (or are themselves products) covered by EU harmonisation legislation and, on the other hand, AI systems deployed in sensitive areas such as biometric identification, critical infrastructure, education, employment, and law enforcement.
The AI Act imposes strict requirements on these systems, including a comprehensive risk management process, rigorous data governance practices, and strong transparency and documentation standards. Providers must ensure that these systems are thoroughly tested, designed to allow effective human oversight, and resilient against cybersecurity threats.
3. Transparency obligations for providers and deployers of certain AI systems
The AI Act lays down transparency obligations for providers and deployers of AI systems. Providers must ensure that AI systems intended to interact directly with natural persons inform those persons that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Providers of AI systems generating synthetic audio, image, video or text content must also ensure that such content is marked, in a machine-readable format, as artificially generated or manipulated.
Deployers of emotion recognition systems or biometric categorisation systems need to inform individuals about the system’s operation and comply with data protection regulations.
For deep fakes, deployers must disclose that the content has been artificially generated or manipulated, unless legally exempt. This information must be clear, provided at the latest at the time of the first interaction or exposure, and comply with applicable accessibility requirements. Moreover, deployers of an AI system that generates or manipulates text published with the purpose of informing the public on matters of public interest must disclose that the text has been artificially generated or manipulated, unless legally exempt or where the AI-generated content has undergone a process of human review or editorial control.
4. General-purpose AI models
General-purpose AI models, meaning AI models that display significant generality and are capable of competently performing a wide range of distinct tasks, must meet documentation and transparency requirements, particularly where they pose systemic risks. High-impact general-purpose AI models face additional scrutiny, notably regarding their security.
5. Measures in support of innovation
Member States are required to establish at least one AI regulatory sandbox at national level by 2 August 2026, potentially in collaboration with other Member States. These sandboxes provide controlled environments for the development, testing, and validation of innovative AI systems before they are placed on the market, with the aim of enhancing legal certainty and fostering innovation.
6. Governance
The Commission shall develop EU expertise and capabilities in the field of AI through the AI Office, with support from the Member States. In addition, the AI Act establishes the European Artificial Intelligence Board, comprising one representative from each Member State, to ensure the consistent application of the AI Act. The Board coordinates among national authorities, issues guidance, shares best practices, and assists in the implementation of AI policies. Moreover, each Member State must establish or designate at least one notifying authority and at least one market surveillance authority to enforce the AI Act.
7. EU database for high-risk AI systems
The AI Act also provides for the creation of an EU database containing information on high-risk AI systems. This database, to be set up and maintained by the Commission in collaboration with the Member States, will include data entered by providers, authorised representatives, and public-authority deployers. While most of the information in the database will be publicly available in a user-friendly, machine-readable format, some data will be accessible only to market surveillance authorities and the Commission, unless the provider consents to public access.
8. Post-market monitoring, information sharing and market surveillance
Providers of high-risk AI systems must establish a post-market monitoring system proportionate to the nature of the AI technologies and the risks they pose. This system must systematically collect, document, and analyse relevant data throughout the AI system’s lifecycle to ensure ongoing compliance with regulatory requirements.
Any natural or legal person may lodge a complaint with the competent market surveillance authority regarding infringements of the AI Act. Affected persons also have the right to clear and meaningful explanations of decisions taken on the basis of high-risk AI systems, except where EU or national law provides otherwise.
9. Penalties
The AI Act requires Member States to lay down rules on penalties applicable to infringements of its provisions and to take all measures necessary to ensure that they are implemented, but it also directly imposes administrative fines for various infringements. These fines can be substantial, reaching, for the most serious infringements, up to EUR 35,000,000 or, in the case of a company, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher. By way of illustration, a company with a worldwide annual turnover of EUR 1 billion could face a maximum fine of EUR 70 million, since 7% of its turnover exceeds the EUR 35 million threshold.
10. Entry into force and application
The AI Act will enter into force on 1 August 2024 and will apply as of 2 August 2026. However:
• The general provisions and the provisions on prohibited AI practices shall be applicable as of 2 February 2025;
• The provisions on notifying authorities in relation to high-risk AI systems, the provisions on general-purpose AI models, the provisions on governance, the provisions on penalties (except those relating to fines for providers of general-purpose AI models) and the provisions on confidentiality shall apply as of 2 August 2025;
• The classification rules for high-risk AI systems that are safety components of regulated products (Article 6(1)) and the corresponding obligations will apply as of 2 August 2027.