
Italy’s AI Law: the good, the bad…and the actual substance

""
""

On 10 October 2025, Italy's AI Law (Law No. 132/2025) entered into force – the first comprehensive national AI framework in the EU. While its scope spans areas such as minors' protection, AI training data, copyright, and deepfakes, the real question is how far it can go within the boundaries of the AI Act. Following the European Commission's scrutiny, the AI Law now expressly mandates consistency with the AI Act and establishes a governance structure relying on existing authorities such as AgID and ACN. Its upcoming implementing decrees, as well as secondary legislation, will determine whether Italy's approach achieves coherence – or risks adding another layer to the European AI regulatory landscape.


As widely reported, on 10 October 2025 the Italian "AI Law" (Law No. 132/2025) entered into force.

By now, it is common knowledge that this is the first comprehensive national AI framework in the EU, ranging across multiple domains (including the protection of minors and parental consent, the use of data and algorithms for AI training, copyright and TDM exceptions, as well as deepfakes), and that a flurry of commentary has already mapped its perimeter.

What matters now is understanding the AI Law's actual reach – especially in light of the EU Commission's intervention and guidance, which have shaped how far Italy – as well as any other Member State – can go within the harmonised framework created by the AI Act (Regulation (EU) 2024/1689).

What the AI Law does (and doesn’t do) for businesses

The key to interpreting the AI Law

Before focusing on individual provisions, it is necessary to establish how the AI Law must be interpreted.

Article 1(1) ("Purpose and scope") and Article 3(5) ("General principles") mandate consistency with the AI Act and rule out imposing obligations beyond it – a framing that stems from the exchanges with the EU Commission in the context of (confidential) TRIS Notification 2024/0438/IT, already commented here.

These provisions are key and must be read as the AI Law's interpretative compass. They contextualise the text, temper expansive readings, and likely provide strong grounds against potential enforcement actions that would outpace the AI Act and/or other EU legislation to which the AI Act is expressly without prejudice (among others, Regulation (EU) 2016/679, "GDPR"; and Regulation (EU) 2022/2065, "DSA").

While this does not render the AI Law irrelevant, it means it operates within a clearly delineated perimeter set by the EU legislator, as increasingly shaped by the EU Commission and other EU institutions' guidance. Consequently, the AI Law cannot be read in isolation – its provisions, and any implementing decrees, must be construed in conformity with the EU framework.

Governance and authorities

Building on the EU Commission's observations, the AI Law no longer aims to create a standalone, "organic" domestic framework. Rather, Article 20 explicitly positions the AI Law within the perimeter of the AI Act, once again mandating a consistent interpretation.

In outlining governance, Article 20 of the AI Law designates the Agency for Digital Italy (AgID) as the notifying authority, and the National Cybersecurity Agency (ACN) as market surveillance authority and single point of contact with EU institutions. It also confirms existing authorities – Bank of Italy, CONSOB, IVASS – as market surveillance authorities for the relevant sectors, to the extent permitted by the AI Act. At the same time, it reserves the existing powers of the Italian DPA and the Italian Communications Authority (AGCOM), which continue to operate within their respective remits. As with other EU-derived regimes, the precise allocation and coordination of competences will ultimately unfold in how enforcement actually takes place.

Sectoral provisions

Although the AI Law insists it does not add obligations beyond the AI Act, it does address areas of practical significance already covered by the AI Act itself and by other EU legislation to which the AI Act is expressly "without prejudice":

  • Protection of Minors and Consent: Article 4(4) of the AI Law states that access to AI technologies by minors under 14, including the processing of their data, requires parental consent and must employ clear, simple language. Despite its poor wording, this provision cannot be read as a standalone access or age-gating rule: consistently with the interpretative key of Articles 1(2) and 3(5), it must be interpreted in line with the GDPR and the DSA. Our take is that Article 4(4) should be read as detailing rules on minors' consent to the processing of personal data in connection with AI, limited to cases where consent is the applicable legal basis under the GDPR (mirroring Article 8 GDPR, which addresses children's consent in the context of information society services).
  • Research and Experimentation: New rules are introduced on the processing of personal data for AI-driven research and experimentation, including the creation of dedicated testing environments and the secondary use of personal data.
  • Professional Use & Transparency: Article 13 regulates AI in professional services, stating that AI may be used only for support/auxiliary tasks, leaving the central intellectual work to humans. Also, the practitioner must inform clients of AI use in "clear, simple, and exhaustive" language.
  • Copyright & Text and Data Mining (TDM): By amending the Italian Copyright Law (Law No. 633/1941), the AI Law reinforces human authorship and, also consistently with the AI Act, extends TDM exceptions to reproductions and extractions carried out through AI systems, including generative AI. For more on this topic, see here.
  • Criminalization of Deepfake Dissemination: Article 26(1)(c) introduces a new Article 612-quater of the Italian Criminal Code, punishing with imprisonment of 1 to 5 years the dissemination of AI-generated or AI-altered images, videos, or voice that mislead as to their authenticity and cause unjust harm. While "dissemination" is an autonomous notion of EU law, and a consistent interpretation may capture harms beyond those intended, its practical application should be monitored. From a broader perspective, Article 26(1)(c) also echoes Directive (EU) 2024/1385 ("GBVD"), which narrowly criminalises certain deepfake conduct.
  • Public Administration & Judiciary: The AI Law provides that the decision-making authority in legal/judicial functions must remain human. AI may assist administrative tasks.

More to come: Implementing decrees due by October 2026

The assessment of the AI Law doesn’t stop here – it is an evolving framework.

Article 16 mandates one or more implementing decrees by October 2026 to define an "organic framework" for data, algorithms, and AI-training methods, including rights, obligations, remedies, and sanctions. Article 24 adds a broad delegation to the Government to specify the appointed authorities' powers (notably market surveillance and sanctioning), to align sectoral legislation with the AI Act, and to address "unlawful AI system creation and uses".

Crucially, these implementing decrees will have to be read through the same interpretative lens – alignment with the AI Act, no new obligations beyond it, and respect for the AI Act's "without prejudice" clauses. In short, more will follow, but it must remain within the EU perimeter; the implementing phase will be important and should be closely tracked and construed consistently with that compass.

 

 

Authored by Giulia Mariuz, Ambra Pacitti, and Anna Albanese.
