I asked it to define itself, and it replied: "Artificial intelligence (AI) is the ability of a machine to simulate intelligent human behavior: to learn, reason and solve problems. Thanks to algorithms and data, machines can perform tasks that normally require human intelligence." Within these boundaries, opinion is split between those who are greatly enthusiastic and those who are deeply worried about what appears to be both a solution and a problem, and the fear of a "dictatorship of algorithms" is widespread today. Leaving aside the ethical issues, the practical advantages and the fear of something that can oversee human creation, one of the most controversial fields of AI is that of law, whose leading European experts include the Messina-born Oreste Pollicino. Forty-nine years old, a graduate with honors from his home city, trained in Bologna, Bruges and Oxford, he is full professor of constitutional law and artificial intelligence regulation at Bocconi University in Milan, where he also directs the Master in Law of Technology and Automated Systems. Among other things, he chairs the Centre on Digital Constitutionalism and Policy in Brussels and is one of the few Italians taking part in the EU plenary for the first code of conduct for general-purpose AI. The author of many publications, including the entry "Digital power" in the Encyclopedia of Law, in his most recent book, written with Pietro Dunn, "Artificial Intelligence and Democracy" (Bocconi University Press), he addresses AI not only as a technical-legal question but as a testing ground for the resilience of our constitutional values. He has now been appointed coordinator of the Data Governance and Compliance Committee (promoted by the School of Economic and Social Policies). Tomorrow, 19 June, in Taormina, Pollicino will take part in the event "American lessons and boundaries between humanity and technology", organized by Taobuk and dedicated to AI. We asked him some questions.
What is the Data Governance and Compliance Committee, and what is it for?
"It is a technical body whose main purpose is to support the implementation of the European AI Act (the regulation approved in 2024) in Italy. It is not only a consultative body, but a place of dialogue and technical-scientific exchange, where concrete solutions are developed to foster the creation of a regulatory and operational ecosystem capable of making the new rules on AI genuinely effective. The Committee aims to be a bridge between institutions, businesses, academia and civil society."
How does it relate to the institutions, and what can it do for them?
"It is intended as a technical interlocutor of reference for national and local institutions. It can offer support in drawing up guidelines, operational recommendations and best-practice models that make it possible to translate the principles of the AI Act into tools applicable in everyday reality. In addition, the Committee fosters dialogue between the public and private sectors, helping institutions to understand the needs of companies and to gather the demands of civil society, so as to build inclusive and truly effective policies."
As coordinator, what is your task?
"To foster the creation of what I would call an 'architecture of trust'. This means guiding the work of the Committee so that it develops concrete proposals for AI governance based on transparency, accountability and the protection of fundamental rights. The work program provides for the development of co-regulation tools, for monitoring the impact of the AI Act on the productive and social fabric, and for the activation of stable channels of exchange with companies, public bodies, the academic world and civil society."
In your book you also speak of an "architecture of rights in the society of algorithms". How should it be built? Has anything already been done in Europe and in Italy?
"The architecture of rights must rest on three pillars: clear rules, enforcement tools that make those rules effective in practice, and a widespread culture of responsibility. In Europe, a great deal has been done with the GDPR, the data protection regulation, and now with the AI Act, the first global attempt to regulate artificial intelligence in a systemic way. In Italy, the challenge is to implement these rules without stopping at formal transposition, creating instead an ecosystem that guarantees their concrete application, thanks also to committees such as the one I coordinate and to other co-regulation tools."
What chance of success is there in the face of a technology such as AI, which produces new developments every day at a supranational level?
"The possibility exists, and it depends on the ability of the rules to be at once solid in their principles and flexible in their adaptation mechanisms. The AI Act adopts a risk-based approach precisely to ensure that the rules are not overtaken by technological developments but can evolve together with them. At the supranational level, this requires a constant commitment from the European and national institutions to monitoring, to reviewing the rules and to actively involving all the actors in the ecosystem."
You have also explained that AI puts concepts such as equality, freedom of expression and data protection under stress. Can one be optimistic?
"Yes, one can and must be optimistic. Artificial intelligence represents a challenge, but also an opportunity to strengthen fundamental rights. The law must return to being a generative force, capable not only of reacting to risks but of steering technological development in a direction consistent with democratic values. The regulatory work already carried out in Europe shows that a responsible and forward-looking approach is possible."
You have written that the relationship between new technologies and democratic values is a privileged vantage point from which to assess the state of health of the rule of law: how are we doing, are we seriously ill?
"We cannot call ourselves 'sick' in an irreversible sense, but we are under pressure. The expansion of AI and digital technologies tests the effectiveness of fundamental rights. However, the ability of institutions to react with innovative regulatory tools such as the GDPR and the AI Act shows that our system retains important antibodies. The challenge now is to apply these rules and to strengthen them."
There is no doubt that AI, including through the use of images, is becoming a tool for creating fake news (another subject on which you are an expert) that looks more and more like the truth. What legal tools can combat this use?
"At the European level, the Digital Services Act and the Digital Markets Act lay the foundations for greater control of online disinformation. The AI Act itself intervenes on high-risk systems, which could include those intended for the automatic generation of misleading content. What is also needed are algorithmic transparency mechanisms, codes of conduct and rapid reporting systems for harmful content, as well as close collaboration between platforms, public authorities and civil society."
How can an ordinary citizen avoid being ensnared by the apparent advantages of AI and avoid becoming a pawn of global computer systems?
"The first tool is awareness. It is necessary to invest in digital education, starting from school, so that citizens, and young people in particular, develop a critical sense and the ability to assess the risks and benefits of AI. Institutions must promote educational programs and awareness campaigns that make the functioning of algorithmic systems and their possible effects on daily life transparent."
One last question: as a constitutional scholar at Bocconi, can you explain what role universities play in "governing" technological innovation positively?
"Universities have a crucial role: they must be places where not only technical skills are formed, but also ethical and legal awareness. They must promote interdisciplinary dialogue between law, technology, economics and philosophy, building models of legal culture, openness to exchange between disciplines and attention to the European context."