By courtesy of the publisher Mimesis, we publish the preface by Piero Martello, director of the online legal magazine Labor Rights Europe, to the volume AI, Law and Justice: From the AI Act to Law 132/2025, edited by Marco Biasi and Mariano Sciacca and dedicated to the relationship between Artificial Intelligence and the world of law.

Many questions arise for jurists when faced with what, for simplicity (simplification?) and convenience (laziness?), is called Artificial Intelligence (AI).
Whatever the meaning and content of AI, the worst approach is laziness, which causes this reality to be either hastily embraced or uncritically rejected.
Laziness often leads some to refuse, out of prejudice, to use AI at all; and many others to surrender, just as uncritically, to a faith-based vision of it.
Thus, to borrow Umberto Eco's categories, we find ourselves faced with the Apocalyptic and the Integrated.
The former are convinced that AI cannot be reconciled with the jurist's activity, which would be integrally and ontologically bound to the human spirit and to speculative thought, and therefore incompatible with the automatisms of algorithms and information technology.
The latter hope that this same technology will relieve them of the effort of pondering and reflection that has historically underpinned the hermeneutic activity characterizing and qualifying the jurist.
There are therefore opposite reactions from enthusiasts, who predict miraculous results; and critics who hypothesize deleterious dehumanizing effects.
What to say to both?
That it is the task of critical thinking to overcome such Manichean positions, divergences grounded only in dogmatic prejudice and uncritical certainty, if not, more simply, in fear of the new; and to make both sides understand this.
The best way to escape the grip of this double distortion is to gain the deepest possible knowledge of the phenomenon, delving into its complexities and identifying its lights and shadows, its limits and its potential. Only a full knowledge of reality, purified of myth, can restore serenity to the anguished Apocalyptic and temper the miraculous faith of the Integrated enthusiast.
To pursue this function, the legal journal Labor Rights Europe has for many years run a column entitled Digital Transition, which has built a path of information and training aimed at creating a sort of “literacy” among legal readers, illustrating the real scope of AI tools, their opportunities and the limits of their effectiveness, with the aim of making clear what can be asked of AI and what it cannot give.


Thus, over the years, the Journal has hosted a large number of contributions from experts in law and IT, but also in other areas of knowledge, creating a mosaic of opinions aimed at giving a more precise idea of the multiple profiles that the topic of AI presents to jurists.
All the articles now reproduced in this volume were originally published in that column. The added value of the volume lies in the fact that presenting these many points of view in a single context will help the reader perceive the breadth and depth of the issues involved, and help him avoid both prejudicial rejection and uncritical acceptance.
Of course, the reflection on these themes will not end with this Volume; and the debate, discussions, arguments and, probably, mutual anathemas will continue for a long time to come.
If nothing else, because the evolution of AI will proceed at the accelerated pace we are now experiencing, opening new perspectives and posing new questions, offering updated solutions and demanding adequate protections.
This is an area with ever-shifting boundaries, driven by the constant evolution of technology toward an ever-receding horizon. With ever more pressing frequency, it poses new challenges and growing questions not only to jurists but to all those who, for reasons of various kinds, are called to deal with multiple social realities: collective bodies, professionals and consultants, private and institutional decision-makers.
In the knowledge that, if they do not deal with Artificial Intelligence, it will still deal with the realities entrusted to them, without asking for permission.
It is common experience that Artificial Intelligence has become part of our lives (professional and otherwise) without asking our opinion.
Evolutionary processes, in short, are either governed or endured.
Management requires, first of all, that we try to master the IT medium and avoid delegating tasks to it that are foreign to it.
There have been, and there are, jurists who delegate to AI the most typical function of their role, that of interpretation. This delegation often reaches the extreme of irresponsibility: signing a court filing without even having read what the AI has written, to the point of citing judgments that do not exist.
These cases of “computer hallucination” have formed a veritable museum of horrors and have led to severe rulings in court.
A singular “evaluative abdication” has occurred, one that mortifies the typical function of the jurist and could lead to the cancellation of his role.
A telling example is that of “predictive justice”, the sector that most directly concerns jurisdiction, in terms of both its effectiveness and efficiency, the methods of exercising it, and the risks associated with a reckless approach.
On this topic, expectations have formed that are at times naively optimistic and exaggerated, at times apocalyptic. But there is no doubt that realistic and beneficial prospects can be developed in this sector, and it is better to know their actual scope and usefulness. Any prognosis will necessarily be provisional, because it is likely to be overtaken in the short term by the whirlwind, incessant development of the technology.
The profitable, not “hallucinated”, use of AI requires a higher level of competence and knowledge, since only in this way will it be possible to put to the computer interlocutor the right questions, allowing the machine to provide congruent answers.
The quality of the AI’s answers depends on the quality of the questions it is asked.
This is why the relationship with AI requires a high level of human competence.
The great data-processing capacity available to AI is not sufficient to overcome what has aptly been called its “cognitive blindness”, which prevents it from grasping cause-and-effect relationships; nor does it allow it to go beyond the mere reconstruction of the existing legal framework and arrive at an evolutionary reading of it, capable of building new and more advanced solutions.
This result is and remains, in the current state of technology, the task of Man and his “Natural Intelligence”, which encompasses not only legal competence but also balance, sensitivity, emotions, feelings and ethics.
Only the presence of “Natural Intelligence” will be able to prevent the “opacity of the algorithm” from leading to the violation of the rights of the person or, at least, to discriminatory effects.
Ultimately, the fundamental difference between technology and the Human lies in the fact that the machine chooses, man decides.
Through a sort of heterogenesis of ends, the scheme is thus reversed: AI heightens the degree of competence needed to extract valuable material from the immense deposit of data and to avoid the waste, that is, the useless content that is a source of cognitive distortions and computer hallucinations.
Hence a paradox, though only an apparent one: to use AI well, an ever more robust dose of natural, human intelligence will be needed (the real one, the only one that deserves to be called “intelligence”).
One without the other is blind; the second, without the first, is deaf.
