Prof. Avv. Roberto De Vita’s speech at the presentation of the Generative Artificial Intelligence Learning and Innovation Hub (GAILIH) Observatory on July 2, 2024, in the session on “The Challenges of Artificial Intelligence and Italian and European Regulation,” moderated by I-com President Dr. Stefano da Empoli.
Stefano da Empoli: I would like to draw on your criminal law expertise – but not only that, since you have also been involved in digital matters for many years – and ask Avv. De Vita what aspects of the application of generative artificial intelligence to law enforcement are of interest, and what risks and problems it poses.
Roberto De Vita: Thank you all. Let me first make a premise – not to claim an authority I do not have, but so that these reflections do not remain confined to an important yet narrow perspective.
For 13 years I have directed Eurispes’ Cybersecurity Observatory, the first in Italy, and for almost as long I have held the national chair in security for new technologies for the Guardia di Finanza, at the School of Economic-Financial Police, where we study the potential and actual applications of these technologies, on both the good side and the bad, as well as the regulatory framework. I say this to make clear that combining the two dimensions is complex, and I will answer Stefano da Empoli’s question right away.
The bad guys already make great use of everything digital. Take, for example, the applications of generative artificial intelligence to the whole world of cryptography. The bad guys use cryptography; in fact we all use it: we protect our communications and keep our stored data confidential. Cryptography is part of how everyone protects their information and their relationships, bad guys and good guys alike.
What do the good guys have to do? Decryption: we have to uncover the content of a bad guy’s message.
Let’s take a trivial example: the Guardia di Finanza, the Police or the Carabinieri arrive, find a safe, call the Fire Department or a locksmith, open the safe, and read the documents inside.
Now suppose the Guardia di Finanza, the Police or the Carabinieri arrive and find an encrypted USB stick or hard drive. What do they do? Who do they call?
This trivial example is enough to show that, in the field of law enforcement, the problem posed by digital innovation is two-sided. So the question arises: what rules govern enforcement? We know perfectly well that the bad guys do not follow the rules, and as a result they are more competitive.
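To make the asymmetry concrete, here is a minimal, purely illustrative sketch in Python (assuming the third-party `cryptography` package): for properly encrypted data there is no equivalent of the locksmith, because without the key, decryption simply fails rather than yielding the contents.

```python
# Illustrative only: why an encrypted drive cannot be "opened" like a safe.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()                     # the owner's secret key
ciphertext = Fernet(key).encrypt(b"the contents of the 'safe'")

# Investigators holding only the ciphertext, but not the key:
wrong_key = Fernet.generate_key()               # any key other than the original
try:
    Fernet(wrong_key).decrypt(ciphertext)
except InvalidToken:
    print("Decryption failed: without the original key there is no locksmith.")
```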
Let’s take another example: dynamic child pornography, that is, the creation of child sexual abuse content through generative artificial intelligence – films of apparent reality, or abusive comics. Anything can be created. That is the bad guys’ side. And what do the good guys do? They ask themselves: is it better to tolerate animated material, which disincentivizes the production of real abuse footage shot in the “farms” of horror, or should this too be punished and sanctioned, and that restriction imposed accordingly?
Going further, there are plans to regulate the use of artificial intelligence in judicial systems. The new bill[1] has a number of key passages, in particular Articles 14 and 25. Art. 14 makes one concept clear: artificial intelligence systems may be used only for the organization of judicial offices, not for decisions, because they have no role in the exercise of critical judgment or in the evaluation of evidence.
But is this really how things stand? Artificial intelligence is already being used, because the trial, especially the criminal trial, has over time increasingly been transformed into a process built on digital evidence. So the formation of digital technical evidence enters in an atypical, unregulated way, beyond the judge’s control; the judge then makes only a final assessment. We ought to have rules on how technical evidence is formed. This may sound like a difficult discussion, but it is actually very simple. Take a wiretap, a capture device, any file containing recorded speech. We have to determine whether it is genuine or not. It enters the proceedings – and how do you determine whether it is genuine? You commission a technical consultation, and what do the technical consultants use? Programs. And how are those programs coded? Who certifies that a given program is or is not capable of recognizing authenticity? So what actually enters the criminal process is not only artificial intelligence but a whole world of unregulated, atypical tools. Saying that the judge must decide is fine, but the judge decides on the basis of the evidence.
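By way of illustration of how limited such program-based checks can be, here is a minimal Python sketch (the file names are hypothetical) of the most elementary test a consultant can run: a cryptographic hash comparison. It proves only that the exhibit is bit-for-bit identical to the file acquired; whether the recording itself is genuine is precisely the question the “programs” are supposed to answer.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

acquired = sha256_of("wiretap_original.wav")   # hash recorded at acquisition
in_trial = sha256_of("wiretap_exhibit.wav")    # hash of the trial exhibit
print("identical bitstream" if acquired == in_trial else "file was altered")
```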
Just imagine: the judge decides, but the evaluation of testimony is a human, analog evaluation. What if the testimony is digital? Keep in mind that even now, in courtrooms, people think they can compare images just by looking at them – a sensory perception that is by now astronomically far from the criterion of the electronic fingerprint that an image leaves behind. If in a trial I produce a photograph portraying Roberto De Vita standing in a certain place or doing a certain thing, they will reason about its origin by saying “it seems to me.” And so the problem forces its way in: it is the analog perception of digital evidence, and of evidence generated by artificial intelligence, that is the problem.
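As a sketch of what an “electronic fingerprint” might mean in practice: a digital photograph typically carries technical metadata (EXIF fields such as camera model, timestamp, editing software) that no courtroom glance can perceive. A minimal Python example, assuming the Pillow library and a hypothetical file name:

```python
from PIL import Image
from PIL.ExifTags import TAGS

# Print the EXIF metadata embedded in a (hypothetical) trial exhibit.
with Image.open("exhibit_photo.jpg") as img:
    exif = img.getexif()
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)   # e.g. Make, Model, DateTime, Software
        print(f"{name}: {value}")
```

Such metadata can of course be stripped or forged, which only reinforces the point: assessing the origin of digital evidence requires technical criteria, not the sensory impression that “it seems to me.”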
So if we want the judge, and the trial itself, to remain a place where critical judgment is exercised, the judge must first of all be aware of the limits of analog culture in relation to technological tools. And that awareness does not come from legislation; it comes from training.
There is a commitment here, and rightly so, to training programs for younger people. Earlier I heard an important point from the representative of the Community of Sant’Egidio, who spoke about the elderly. But we know perfectly well that one becomes “elderly” the moment one leaves the circuit of active, productive working life – at perhaps 60, 65 or 70 years old, with a life expectancy of another 20 years.
We know that training happens in schools, in universities, or in the workplace; outside those, there is no more training. Currently, people aged 60 and over are the targets of 80-85% of all the scams carried out through artificial intelligence. You will ask: how do scammers use artificial intelligence? Very simply: through the generative dialogue of a GPT-style chat, which scripts an improbable story that, to someone with only analog training, is absolutely indecipherable.
At the same time, training is aimed at younger people. But who, right now, is working in the area of justice, predictive policing, and the application of artificial intelligence? Older people. We are the ones who have to go back to school, not the 15-to-20-year-olds. We should send all magistrates, lawyers, police officers and carabinieri back to school, so that all of them understand that WhatsApp is not a tool that gives certainty just because I show a series of messages, and that the contents of a cell phone can be edited.
We have fought battles to limit the use of trojans – not to side with the bad guys, but because we know they are an intrusive capability open to manipulation, just like artificial intelligence.
In the end, the underlying problem is and remains cultural, and a university for digital transformation and innovation must first and foremost take in those who, right now, hold power and responsibility in the use and application of artificial intelligence, even when that use is unwitting.
Stefano da Empoli: You referred earlier to those two articles and dwelt on the first one; tell us something about Art. 25 and your opinion of it.
Roberto De Vita: The underlying problem is that you can only regulate conduct that has some rate of compliance with the rule you lay down. The real problem is unreported abuse, illicit use – in short, what I said before: the use of AI (and of digital tools in general) by the bad guys.
Then there is the theme of recognizability. Typically a fake does not carry a “fake” label; when I buy a counterfeit product, it doesn’t say “fake” on it. So my reliance can be protected when something is the declared output of an artificial intelligence; otherwise it is more difficult. That covers direct outputs, and now I come to Art. 25.
Then there is the indirect problem. Let’s take an example: you receive a phone call in which a voice – mine, which is very recognizable – says terrible things to you. You are convinced that I called. What do you do? Whom do you report? A user with my voice, and you are sure it is Roberto De Vita. But the underlying problem is that it was not me; it was the AI.
The caller does not declare beforehand, “this is the product of AI.” Article 25 addresses this issue by introducing an aggravating circumstance – a provision, systematized within substantive criminal law, that punishes the use of the direct or indirect results of AI systems when they constitute an insidious means with a deceptive capacity that can amplify certain crimes, make them more serious, or impair public or private defenses.
The issue is huge, and it is not only substantive in nature. I gave the trivial example of the phone call, but you can imagine the potential AI has to create, in analog perception, the belief that its output is genuine.
There are further provisions, some of which now accompany existing rules. One concerns the creation of materials – typically videos or films.
We all know that the theme of non-consensual pornography – the large container that includes revenge porn – now involves the creation of animated films using the likenesses of real people, indeed exactly those people, in high definition, in the act of performing sexual intercourse.
Of course, the rule has a liberal slant: it applies only if harm is caused or if the person has not consented. Set consent aside and look at the harm element. This is not a harm attributable to a single person; there is a timidity of approach here, one also found in other jurisdictions.
Anything we fail to recognize as the fruit of generative artificial intelligence is likely to create, especially in analog minds – and minds will remain analog for a long time, even culturally – disorientation and difficulty in distinguishing what is true from what is not. The punitive and sanctioning response should be far stronger for a phenomenon that is currently being treated as mere forgery. It is not mere falsification: let us always remember that, indirectly, the results of manipulation by artificial intelligence enter an analog decision-making chain operated by a human being who does not recognize them and is convinced he is deciding in an original, first-hand way – but he is not.
Art. 25 tries to give answers, still tentative ones; we will have to see the disasters that will be generated – and keep in mind that mine is not a Luddite attitude or approach: I am a firm believer in AI. I don’t think regulation is enough. It is very useful for making businesses or public administration work, but as far as protection is concerned we need an unparalleled cultural investment, one we failed to make even in past years, during the first digital revolution. Let me make a joke: only Covid and the lockdown made the Courts in Italy equip themselves with certified e-mail (PEC) so they could receive documents. I do not know whether we should wait for a plague of images before we begin to understand the consequences of using artificial intelligence.
References
[1] Bill (DDL) no. 1146 – Italian Senate.
