News

European Commission launches public consultation on high-risk AI systems


While some voices are calling for a pause in the implementation schedule of the EU AI Act, the EU Commission is required by the Act to issue guidelines to facilitate the practical implementation of the Regulation. After publishing the first two guidelines, on Prohibited AI Practices (4 February 2025) and on the Definition of an AI System (6 February 2025), it is now launching a new public consultation on high-risk AI systems.

Active stakeholder engagement is critical at this stage to ensure that the implementation of the AI Act reflects practical realities and sector-specific needs. Such participation may directly influence how complex technical provisions are interpreted and applied.

The public consultation is structured around five sections:

Section 1 covers the classification rules for high-risk AI systems in Article 6(1) and Annex I of the AI Act

Under Article 6(1) of the AI Act, an AI system is deemed high-risk if both of the following criteria are met:

  1. It is intended to serve as a safety component of a product, or it constitutes a product in its own right, covered by the Union harmonization legislation listed in Annex I; and 
  2. The product—whether the AI system as a safety component or the AI system itself—is subject to a mandatory third-party conformity assessment before being placed on the market or put into service under that same Annex I legislation.  

This section of the public consultation includes a series of questions examining what constitutes a “safety component,” the interplay with other relevant laws and regulations, and the identification of AI components or products subject to third-party conformity assessments.

Section 2 deals with the classification rules of high-risk AI systems in Annex III of the AI Act

Under Article 6(2) of the AI Act, an AI system is deemed high-risk if it is referred to in Annex III of the AI Act. Article 6(3) sets out the conditions under which an AI system referred to in Annex III is nevertheless exempted from classification as high-risk.

This section of the public consultation contains sector-specific questions designed to map various use cases and clarify their classification, as well as their interaction with national and Union legislation, for example: “Do you have or know practical examples of AI systems related to biometrics where you need further clarification regarding the distinction from prohibited AI systems?”

Input from practitioners in each target sector is essential to ensure the regulatory framework is practical, readable, and relevant. Now is the time for operational experts to engage so that the rules can be tailored appropriately, avoiding unpredictable regulatory burdens that could hinder AI development by EU businesses.

Section 3 addresses general classification issues

This section includes open-ended questions aimed at identifying where further clarification is needed, for example: “What aspects of the definition of the intended purpose, as outlined in Article 3(12) AI Act, need additional clarification? Please specify the concrete elements and the issues for which you need further clarification; please provide concrete examples.”

The questionnaire offers organizations a valuable opportunity, which would otherwise arise only through litigation before a court, to raise issues they may have identified during their legal analysis of the AI Act, potentially in the context of specific AI projects. It is a chance to obtain clarifications from the EU Commission on any grey areas.

Section 4 deals with requirements and obligations related to high-risk AI systems and value chain obligations

High-risk AI systems must comply with requirements in areas such as risk management, data governance, documentation, transparency, human oversight, and security, as outlined in Articles 9 through 15 of the AI Act. Compliance with these requirements must be ensured before placing a high-risk AI system on the EU market. Harmonized standards, currently being developed by CEN and CENELEC, will provide a “presumption of conformity.”

The questionnaire includes specific sections regarding AI systems, providers, and deployers. The questions are open-ended and aim to determine whether further clarification on these topics is needed, potentially through guidelines, and whether guidance is required on the interaction with specific Union legislation. 

Furthermore, the document includes a subsection on the concept of “substantial modification” as outlined in Article 25(1) of the AI Act, as well as another subsection detailing the roles and obligations within the value chain.

Section 5 deals with the assessment and potential amendments of high-risk use cases in Annex III and prohibited practices in Article 5 of the AI Act

This section of the public consultation includes questions about identifying specific AI systems that may need to be added to or removed from the list of high-risk use cases, as well as those requiring prohibition due to conflicts with Union values and fundamental rights. Additionally, it seeks input on whether existing prohibitions in Article 5 are adequately covered by other Union legislation. 

Stakeholders are invited to contribute to the ongoing public consultation, with a view to clarifying complex technical aspects and ensuring that the obligations imposed under the AI Act are proportionate and appropriately aligned with their respective functions within the AI value chain. 

Stakeholders have until 18 July 2025 to submit their input for this public consultation on high-risk AI systems. 

Authored by Etienne Drouard, Olga Kurochkina, and Sarina Singh.

Next steps

18 July 2025: Deadline for stakeholders’ input on high-risk AI systems
