News

What is an AI System? The EU Commission’s guidance


On February 6, 2025, the European Commission published guidelines clarifying the definition of an “AI system” under the EU AI Act. Only systems that meet this definition fall within the AI Act’s scope, making the guidelines a key reference for compliance. The document outlines core characteristics – such as autonomy and inference – that help assess whether a system qualifies. Though not legally binding, the guidelines reflect contributions from stakeholders and the interpretation of the European AI Board. A separate public consultation on high-risk AI systems has been underway since June 6, 2025, showing the Commission’s continued efforts to shape the interpretation of the AI Act. Furthermore, providers of general-purpose AI (GPAI) models are still awaiting critical guidance from the EU AI Board before the AI Act’s specific rules on GPAI models enter into force on August 2, 2025.

The definition of an AI system under the AI Act is structured around seven key elements listed in Article 3(1) of the AI Act: “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

The guidelines aim to elucidate the meaning, scope, and practical implications of each of these elements, thereby facilitating consistent interpretation and application across use cases.

1. Not AI systems – out of scope of the AI Act

The EU Commission explicitly excludes the following tools:

  • Mathematical optimization tools: systems like linear regression or traditional physics models – even if enhanced by machine learning for speed – are not considered AI systems;
  • Basic data processing tools: programs that sort/filter data (e.g., Excel formulas, SQL queries) without any learning or inference capabilities;
  • Classical heuristics: rule-based tools (e.g., chess minimax algorithm) that lack learning capabilities;
  • Simple prediction systems: systems that predict using basic statistical rules (e.g., calculating average temperatures) and do not adapt or evolve over time.
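To make the last exclusion concrete, the following sketch (illustrative only, not legal advice) shows a “simple prediction system” in the guidelines’ sense: it applies a fixed statistical rule – the historical average – and never learns or adapts, so it would fall outside the AI Act’s definition of an AI system.

```python
# A "simple prediction system": a fixed statistical rule with no learning,
# inference from trained parameters, or adaptation over time.

def predict_temperature(history: list[float]) -> float:
    """Predict tomorrow's temperature as the plain average of past readings."""
    return sum(history) / len(history)

print(predict_temperature([18.0, 20.0, 22.0]))  # prints 20.0
```

However sophisticated the surrounding software, the decision rule here is specified entirely by a human in advance, which is precisely what distinguishes it from the systems discussed in the sections that follow.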

2. A machine-based system

An AI system must be developed with, and run on, machines comprising both hardware and software components. Based on the guidelines:

  • “The software components encompass computer code, instructions, programs, operating systems, and applications that handle how the hardware processes data and performs tasks.
  • The hardware components refer to the physical elements of the machine, such as processing units, memory, storage devices, networking units, and input/output interfaces, which provide the infrastructure for computation.”

Examples mentioned include traditional servers, quantum computers, or even biological systems capable of computation.

3. Autonomy

An AI system can perform tasks with a “reasonable” degree of independence from human intervention.

The notion of autonomy under the AI Act encompasses a broad spectrum, ranging from human-in-the-loop systems, which require ongoing human oversight or input, to fully autonomous systems capable of operating without any human involvement. The level of autonomy is an important consideration for a provider when devising, for example, the system’s human oversight or risk mitigation measures in the context of the intended purpose of a system.

Importantly, a system that is designed to automatically generate outputs based on inputs, without those outputs being explicitly specified or controlled by a human, satisfies the criterion of autonomy as defined under the AI Act.

However, systems that are “designed to operate solely with full manual human involvement and intervention” – either direct (manual controls) or indirect (automated systems-based controls) – are excluded from the definition of AI systems, as per recital 12.

4. Adaptiveness – optional condition

Adaptiveness is not a required condition for a system to qualify as an AI system, but it constitutes an additional element of the analysis. The system may have the ability to learn and adapt – in other words “possess adaptiveness or self-learning capabilities” – as it “automatically learns, discovers new patterns, or identifies relationships in the data beyond what it was initially trained” after deployment.
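As an illustration of what such self-learning capability can look like in practice (a hypothetical sketch for exposition, not an example taken from the guidelines), the predictor below keeps refining its internal parameter from every new input it receives after deployment, so its behavior changes over time without human intervention.

```python
# A minimal adaptive system: its prediction changes as it observes new data
# after deployment, via an incremental (online) mean update.

class AdaptivePredictor:
    """Running-mean predictor that updates itself with every observation."""

    def __init__(self) -> None:
        self.count = 0
        self.mean = 0.0

    def observe(self, value: float) -> None:
        # Online mean update: the internal parameter shifts with each input.
        self.count += 1
        self.mean += (value - self.mean) / self.count

    def predict(self) -> float:
        return self.mean

p = AdaptivePredictor()
for v in [10.0, 20.0, 30.0]:
    p.observe(v)
print(p.predict())  # prints 20.0
```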

5. AI system objectives

An AI system operates according to one or more objectives, which can be explicit or implicit.

Explicit objectives may be specified as the optimization of some cost function, a probability, or a cumulative reward.

Implicit objectives are goals that are not explicitly stated but may be deduced from the behavior or underlying assumptions of the AI system. These may stem from the training data or emerge from interactions of the AI system with its environment.
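The notion of an explicit objective can be illustrated with a toy example (a hypothetical sketch, not drawn from the guidelines): here the objective is specified as a cost function, J(w) = (w − target)², which the system minimizes by gradient descent.

```python
# An "explicit objective" expressed as a cost function to be optimized.
# The system iteratively reduces J(w) = (w - target)**2 via gradient descent.

def minimize(target: float, lr: float = 0.1, steps: int = 100) -> float:
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - target)  # derivative of (w - target)**2
        w -= lr * grad           # gradient-descent update
    return w

print(round(minimize(3.0), 3))  # converges to ~3.0
```

An implicit objective, by contrast, would not appear anywhere in the code as an expression like this; it would have to be deduced from the system’s training data or observed behavior.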

6. Inferencing how to generate output using AI techniques

The system infers outputs from inputs using AI techniques, distinguishing it from simple rule-based software.

Examples of AI techniques include:

  • Supervised learning: e.g., spam filters;
  • Unsupervised learning: e.g., drug discovery;
  • Self-supervised learning: e.g., language models that predict the next word;
  • Reinforcement learning: e.g., autonomous vehicles or robot arms;
  • Deep learning: using layered architectures (neural networks) for representation learning;
  • Logic/knowledge-based systems: e.g., symbolic reasoning in expert systems like medical diagnosis tools.
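To illustrate the contrast with rule-based software, the toy spam filter below (hypothetical data, pure Python for self-containment) infers its decision rule from labelled training examples rather than having that rule hand-written by a programmer – the essence of supervised learning:

```python
# A toy supervised-learning spam filter: the classifier derives its decision
# rule from labelled examples instead of hand-coded keyword rules.

from collections import Counter

def train(examples: list[tuple[str, str]]) -> dict[str, Counter]:
    """Count word frequencies per label ('spam' / 'ham') from labelled text."""
    counts: dict[str, Counter] = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(model: dict[str, Counter], text: str) -> str:
    """Score a message by how often its words were seen under each label."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in model.items()}
    return max(scores, key=scores.get)

model = train([
    ("win free prize now", "spam"),
    ("claim your free offer", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch tomorrow with the team", "ham"),
])
print(classify(model, "free prize offer"))  # prints "spam"
```

Whether a real system uses statistical learning like this or logic- and knowledge-based reasoning, what matters for the definition is that the output is inferred from inputs rather than fully pre-specified.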

7. Generating outputs

The ability to generate outputs – such as predictions, content, recommendations, or decisions – from the inputs it receives, using machine learning or logic- and knowledge-based techniques, is central to the functionality of an AI system and is what fundamentally distinguishes it from traditional software.

Examples of generated outputs include:

  • Predictions, e.g., real-time predictions that are made by AI systems deployed in self-driving cars;
  • Content, e.g., text, image, or video content (such as those generated by ChatGPT);
  • Recommendations, e.g., suggested actions, responses or choices;
  • Decisions, e.g., selections or determinations made by the system.

8. Generated outputs can influence physical or virtual environments

An AI system’s outputs can produce tangible effects in both physical and digital domains.

For example:

  • Physical impact: a robot arm or a self-driving car making a movement;
  • Virtual impact: an algorithm modifying the content (e.g., news or ads) shown to a user.

Authored by Etienne Drouard, Olga Kurochkina, and Sarina Singh.

Next steps

July 18, 2025: Deadline for stakeholders’ input in the EU Commission’s public consultation on high-risk AI systems

August 2, 2025: Deadline for the AI Board's publication of the guidance on GPAI models
