
On February 6, 2025, the European Commission published guidelines clarifying the definition of an "AI system" under the EU AI Act. Only systems that meet this definition fall within the AI Act's scope, making it a key reference for compliance. The document outlines core characteristics, such as autonomy and inference, that help assess whether a system qualifies. Though not legally binding, the guidelines reflect contributions from stakeholders and the interpretation of the European AI Board. A separate public consultation on high-risk AI systems has been underway since June 6, 2025, showing the Commission's continued efforts to shape the interpretation of the AI Act. Furthermore, providers of general-purpose AI (GPAI) models are still awaiting critical last-minute guidance from the European AI Board before the AI Act's specific rules on GPAI enter into force on August 2, 2025.
The definition of an AI system under the AI Act is structured around seven key elements listed in Article 3(1) of the AI Act: “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
The guidelines aim to elucidate the meaning, scope, and practical implications of each of these elements, thereby facilitating consistent interpretation and application across use cases.
The EU Commission explicitly excludes the following tools, which it regards as simpler traditional software systems:
- systems for improving mathematical optimization;
- basic data processing systems;
- systems based on classical heuristics; and
- simple prediction systems.
An AI system must be machine-based, meaning that it is developed with and runs on machines, a term covering both the hardware and software components that enable it to function. Examples mentioned in the guidelines include traditional servers, quantum computers, and even biological systems capable of computation.
An AI system can perform tasks with a “reasonable” degree of independence from human intervention.
The notion of autonomy under the AI Act encompasses a broad spectrum, ranging from human-in-the-loop systems, which require ongoing human oversight or input, to fully autonomous systems capable of operating without any human involvement. The level of autonomy is an important consideration for a provider when devising, for example, the system’s human oversight or risk mitigation measures in the context of the intended purpose of a system.
Importantly, a system that is designed to automatically generate outputs based on inputs, without those outputs being explicitly specified or controlled by a human, satisfies the criterion of autonomy as defined under the AI Act.
However, systems that are “designed to operate solely with full manual human involvement and intervention” – either direct (manual controls) or indirect (automated systems-based controls) – are excluded from the definition of AI systems, as per recital 12.
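To illustrate where a system might sit on that spectrum, here is a minimal, purely hypothetical Python sketch of ours (the guidelines contain no code): the first function requires explicit human confirmation for every output, while the second generates and applies outputs without any per-decision human input.

```python
def human_in_the_loop_decision(recommendation: str) -> str:
    """Human-in-the-loop: the output only takes effect after explicit confirmation."""
    answer = input(f"Apply '{recommendation}'? [y/n] ")
    return recommendation if answer.strip().lower() == "y" else "no action"


def autonomous_decision(sensor_value: float) -> str:
    """Fully autonomous: the output is generated and applied without human input."""
    return "open valve" if sensor_value > 0.8 else "keep valve closed"


# A system built only from the first pattern operates with full human
# involvement and intervention; one built from the second exhibits the
# independence from human input that the AI Act associates with autonomy.
print(autonomous_decision(0.93))  # -> "open valve"
```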
Adaptiveness is not a required condition for a system to qualify as an AI system, but it constitutes an additional element of the analysis. The system may have the ability to learn and adapt – in other words "possess adaptiveness or self-learning capabilities" – as it "automatically learns, discovers new patterns, or identifies relationships in the data beyond what it was initially trained on" after deployment.
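As a concrete illustration of post-deployment adaptiveness, consider the following minimal Python sketch (our own hypothetical example, not taken from the guidelines): the predictor updates its internal parameter with every new observation, so its behavior can evolve beyond what it was initially trained on.

```python
class AdaptivePredictor:
    """Toy online-learning predictor that keeps updating after deployment."""

    def __init__(self, weight: float = 0.0, learning_rate: float = 0.1):
        self.weight = weight              # parameter learned before and after deployment
        self.learning_rate = learning_rate

    def predict(self, x: float) -> float:
        return self.weight * x

    def update(self, x: float, observed: float) -> None:
        # Online gradient step on squared error: the system "adapts"
        # to data it only encounters after deployment.
        error = self.predict(x) - observed
        self.weight -= self.learning_rate * error * x


predictor = AdaptivePredictor()
for x, y in [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]:  # post-deployment observations
    print(f"prediction before update: {predictor.predict(x):.2f}")
    predictor.update(x, y)  # behavior changes as new data arrives
```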
An AI system operates according to one or more objectives, which can be explicit or implicit.
Explicit objectives may be specified as the optimization of some cost function, a probability, or a cumulative reward.
Implicit objectives are goals that are not explicitly stated but may be deduced from the behavior or underlying assumptions of the AI system. These may stem from the training data or emerge from interactions of the AI system with its environment.
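To make the notion of an explicit objective concrete, the following minimal Python sketch (a hypothetical example of ours, not from the guidelines) states the objective directly in code as a mean-squared-error cost function and minimizes it by gradient descent. An implicit objective, by contrast, would never appear in the code and could only be deduced from the training data or the system's observed behavior.

```python
# Explicit objective: minimize mean squared error between predictions and targets.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.1, 5.9, 8.2]

def mse(w: float) -> float:
    """The explicitly specified cost function the system optimizes."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

w, lr = 0.0, 0.01
for _ in range(200):
    # Gradient of the MSE with respect to w, derived analytically.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

print(f"learned weight: {w:.2f}, final cost: {mse(w):.4f}")
```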
The system infers outputs from inputs using AI techniques, distinguishing it from simple rule-based software.
Examples of AI techniques mentioned in the guidelines include:
- machine learning approaches, such as supervised, unsupervised, self-supervised, and reinforcement learning, including deep learning; and
- logic- and knowledge-based approaches, which infer outputs from encoded knowledge, such as knowledge representation, inference and deduction engines, and expert systems.
A sketch of the contrast with rule-based software follows below.
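This contrast with simple rule-based software can be sketched as follows; it is a hypothetical Python example of ours, not drawn from the guidelines. The first function only ever applies a rule a human has fully specified, whereas the second infers its decision threshold from example data, which is the kind of inference the definition targets.

```python
# Rule-based software: every output is explicitly specified by a human-written rule.
def rule_based_spam_filter(message: str) -> bool:
    return "free money" in message.lower()  # fixed, hand-coded rule


# Inference: the decision threshold is learned from examples, not hand-coded.
def learn_threshold(lengths: list[int], labels: list[bool]) -> int:
    """Pick the message-length threshold that best separates spam from non-spam."""
    candidates = sorted(set(lengths))
    return max(
        candidates,
        key=lambda t: sum((length >= t) == label
                          for length, label in zip(lengths, labels)),
    )


threshold = learn_threshold([120, 300, 80, 450], [False, True, False, True])
print(f"learned threshold: {threshold}")         # inferred from the data
print(rule_based_spam_filter("FREE MONEY now"))  # True, by a fixed rule
```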
The ability of an AI system to generate outputs such as predictions, content, or recommendations from the inputs it receives, using machine learning or logic- and knowledge-based techniques, is central to the functionality of AI systems and fundamentally distinguishes them from traditional software.
Examples of generated outputs include:
- predictions, i.e., estimates about unknown values or future events;
- content, such as text, images, audio, or video;
- recommendations, such as suggested actions, products, or services; and
- decisions, i.e., conclusions or results produced automatically by the system.
An AI system’s outputs can produce tangible effects in both physical and digital domains.
For example, an AI system may influence a physical environment by controlling a robotic arm on a factory floor, or a virtual environment by curating the content displayed in a digital space such as a recommendation feed.
Authored by Etienne Drouard, Olga Kurochkina, and Sarina Singh.
July 18, 2025: Deadline for stakeholder input in the EU Commission's public consultation on high-risk AI systems
August 2, 2025: Deadline for the European AI Board's publication of its guidance on GPAI models