Initial reflections on Law 132/25: Italy's approach to AI

Key takeaways

Law 132 aims to transpose and complete the legislative framework defined by the AI Act.

Whilst the opening provisions set out the general principles that should inspire AI systems and models, Law 132's specific provisions cover five areas: national strategy, national authorities, promotional activities, copyright protection, and criminal penalties.

It also provides for a delegation to the government to bring national legislation into line with the AI Act in areas such as AI literacy for citizens and training by professional associations for professionals and operators.

On September 26, Law No. 132 of September 23, 2025, was published in the Official Gazette, containing provisions and delegating powers to the Government in the field of artificial intelligence (AI). Law 132 had a rather complex gestation period and aims to transpose and complete the legislative framework defined by the AI Act. At first glance, Law 132 appears to be largely a list of statements of principle, within which, however, there are some interesting provisions that deserve consideration.

The first consideration concerns the opening provisions of Law 132 on the general principles that should inspire AI systems and models. The fundamental rights expressly mentioned include non-discrimination, gender equality, and sustainability. Explicit reference is also made to the transparency, knowability, and explainability of systems and models, as well as to respect for human autonomy and decision-making power. Although these concepts already form the basis of the AI Act, I believe it is important to reiterate that AI is a tool to help humans, not to replace them; to ensure inclusiveness and respect for diversity, not to ghettoize or homogenize; to understand and include different languages and give them a leading role, not to relegate them to the status of social costs. To quote the American economist David Autor: what if, instead of triggering the end of the world, AI served to make the world a better place? What if, instead of accelerating automation and replacing humans with machines, AI were a balancing factor that extends skills to disadvantaged workers, ensures equal opportunities, smooths out discrimination, and reduces social inequality? What if AI were a tool for achieving the substantive equality sought by Article 3 of the Constitution?

In this regard, two provisions of Law 132 contained in the chapter dedicated to sector-specific rules are worth mentioning, among others. The first is Article 7, paragraph 4, on AI in healthcare and disability, which promotes the development and dissemination of AI systems and models that improve the living conditions of people with disabilities and facilitate accessibility, autonomy, and social inclusion. The second is Article 10 on AI in the world of work, which states that AI systems and models should be used to improve working conditions and the mental and physical well-being of workers, as well as to increase the quality of performance and productivity. The following provision, in turn, establishes an Observatory at the Ministry of Labor to monitor the impact of AI systems and models on the world of work.

In addition to the sector-specific provisions, Law 132 covers five areas: national strategy, national authorities, promotional activities, copyright protection, and criminal penalties. It also provides for a delegation to the government to bring national legislation into line with the AI Act in areas such as AI literacy for citizens (both in schools and universities) and training by professional associations for professionals and operators. The delegation also concerns the reorganization of criminal law to adapt crimes and penalties to the illegal use of AI systems.

Beyond the statements of principle, with which one cannot disagree, there are still, in my opinion, some things that could be improved with a view to the coherent and effective development of AI in Italy.

What convinces me least are the provisions on healthcare and scientific research.

Article 8, paragraph 1 of Law 132 stipulates that the processing of data, including personal data, carried out by public entities and private non-profit entities, by Scientific Institutes for Research, Hospitalization and Healthcare (IRCCS), and by private entities operating in the healthcare sector within the scope of research projects involving public entities, private non-profit entities, or IRCCS, for the purposes of the prevention, diagnosis, and treatment of diseases, the development of drugs, therapies, and rehabilitation technologies, the manufacture of medical devices, public health, personal safety, health, and health security, is declared to be of significant public interest, including for the purposes of Article 9(2)(g) of the GDPR. This means, therefore, that the use of patient data to train AI algorithms in medical and scientific research has a specific legal basis in the public interest, and that it is not necessary to seek the consent of the data subjects.

While the principle set out above is acceptable, there are two aspects of the provisions introduced by the same article that warrant examination from a constitutional point of view.

The first concerns the fact that, incomprehensibly, the provision applies only to public entities and non-profit organizations, or to private entities provided that they participate in research projects together with public entities or IRCCS. This leaves uncovered a very significant part of scientific research, namely that carried out by private institutions or by pharmaceutical companies acting as sponsors of studies and research that require funding which public entities are unable to provide.

The second, and in my opinion even more serious, concern is that, after declaring the processing of the personal data in question to be in the public interest and expressly referring to the legal basis set out in Article 9(2)(g) of the GDPR, Article 8(5) of Law 132 introduces an authorization regime for the secondary use of medical data that is superfluous, and arguably unlawful, with respect to the EU principle that recognizes the public interest as the legal basis for processing. It is also clearly inconsistent with the recent amendment to Article 110 of the Privacy Code, approved during the conversion of the PNRR Decree Law into law (an amendment that is itself not without criticism, as it is too weak in relation to the needs of the sector).

The provision in question stipulates that the use of patients' personal data for AI systems, in addition to requiring the approval of the competent ethics committees, must be notified to the Data Protection Authority, and that processing may begin thirty days after that notification unless blocked by the Authority itself. This introduces an authorization regime based on tacit approval, which is unjustified in light of Article 9(2)(g) of the GDPR, expressly referred to in Article 8(1) as the (sole) legal basis for the secondary processing of patients' medical data for the purposes of developing AI systems for the treatment and prevention of diseases.

There cannot be two alternative legal bases. Either the basis is the law which, as in this case, recognizes an overriding public interest on the part of public and private entities operating in the medical-scientific field, or the processing remains prohibited but may be authorized by the administrative supervisory authority, i.e., the Privacy Guarantor.

Moreover, paragraphs 2 and 3 of Article 8 already set out a list of safeguards required for processing data in compliance with privacy rules, such as the need for clear and transparent information and for authorization of the processing of medical data for the purposes of anonymization, pseudonymization, or the creation of synthetic personal data in the context of the creation and development of artificial intelligence systems in the healthcare sector.

The subsequent paragraph 5, which introduces the aforementioned authorization regime, therefore seems both unnecessary and harmful, even in terms of drafting and logical-legal consistency.

It is unnecessary because it adds nothing to the sacrosanct supervisory role of the Data Protection Authority over the processing of particularly intrusive personal data. That role is already guaranteed, first of all, by paragraph 4, which provides that the National Agency for Regional Health Services (AGENAS), after consulting the Data Protection Authority and taking into account international standards and the state of the art and technology, may establish and update guidelines on procedures for the anonymization of personal data and for the creation of synthetic data, including by category of data and purpose of processing; and, subsequently, by the closing paragraph 6, which specifies that the inspection, prohibition, and sanctioning powers of the Data Protection Authority remain unchanged.

It is harmful because it risks slowing down the development of AI systems in the medical-scientific field, increasing bureaucracy and reducing the competitiveness of Italian companies and of the research and development system as a whole, including in comparison with other Member States.

Furthermore, the system designed by Law 132 appears to be completely inconsistent with the aforementioned recent reform of Article 110 of the Privacy Code, which eliminated the need for prior consultation with the Data Protection Authority in the case of secondary use of medical data for medical research, referring instead to compliance with the sector's ethical rules, which the Data Protection Authority will have to adopt on the basis of its own competences. The inconsistency between the two regulations, approved in quick succession, is clear.

Paragraph 5 is equally incomprehensible when read in conjunction with the subsequent Article 9 of Law 132, which, while deeming legitimate the processing of personal data, including sensitive data, in the context of the development of AI and machine learning systems, instructs the Ministry of Health, after consulting the Data Protection Authority, research institutions, health authorities, and other authorities and operators in the sector, to adopt the relevant implementing measures within 120 days of the entry into force of the law, scheduled for October 10.

Leaving aside for now any comment on the decision to entrust AgID and ACN with the role of supervisory authorities for AI, which has been the subject of recent controversy, particularly with the Data Protection Authority, one final provision of Law 132 that deserves attention is the one on investments to promote the development of AI-based technologies. The relevant news is that investments totalling €1 billion are planned in the fields of AI, cybersecurity, and quantum computing in telecommunications and enabling technologies, in order to promote the development, growth, and consolidation of companies operating in these sectors. However, those who were expecting a more specific list of concrete measures were disappointed; we will probably have to wait for the delegated decrees.

Even the amount allocated does not seem to match what other Member States are doing (I would leave aside the US and China, which are playing in a different league). France, under Macron, has announced investments of around €109 billion over the coming years and the creation of 35 new data centres. Germany has placed AI at the centre of its economic strategy, with the stated goal of AI accounting for 10% of GDP by 2030 (new supercomputing centres, European funds, and quantum computing to close the gap with the US and China).

It is true that, in order to compete with the US and China, a common EU investment plan is needed in addition to common rules. In this sense, the words of Mario Draghi, who called for EU investments of €500 billion for technological development, take on even greater value. Of course, this is a political game for which we will have to wait for the next EU parliament, in the hope that there will be a compact and cohesive majority capable of expressing a common industrial policy without delay, a significant part of which will consist of completing the 2030 digitization strategy initiated by the current EU Commission. The hope is that Italy will do its part, beyond proclamations and statements of principle.

Authored by Massimiliano Masnada.
