Synthetic truth is devouring investigations and criminal proceedings

Once, the epistemological relationship between truth and procedural truth was a matter of debate. The limits of the latter were accepted because it was functionally destined to regulate relations among members of society, balancing certainty and order. The truth produced by artificial intelligence is a constant creative and perceptual falsification, which also modifies people’s autobiographical memory and the materiality of the “objective,” and alters the evaluation of evidence, thus severing the category of procedural truth from reality, from truth, and from its knowability. The leviathan has become a demiurge. Precisely at the moment when digital evidence becomes the direct or indirect center of gravity of every criminal investigation (because man is part of a digital fabric with ever tighter meshes), crimes linked to generative AI also condition and corrupt analog evidence. What we are witnessing (for the most part unconsciously) with crimes linked to AI and, in particular, deepfakes, phishing, and sextortion is only a synecdoche of the crisis of evidence in the age of synthetic content.

Generative artificial intelligence has not only expanded the ability to produce content, but has undermined – perhaps definitively – one of the most fragile assumptions of digital coexistence: trust in the authenticity of what we see, hear, and read. Deepfakes, advanced phishing, sextortion, and artificial pornography show how deception no longer depends only on the falsity of the content, but on its plausibility, its extremely rapid dissemination, and its capacity to strike emotions, reputation, assets, and self-determination. In this scenario, the threat concerns not only the commission of new crimes or the enhancement of traditional ones, but also directly affects the solidity of audiovisual evidence, the credibility of digital relationships, the resilience of the procedural system, and the effectiveness of punitive responses. Generative AI, in fact, has not only multiplied the capacity to produce content, but has made it highly credible, difficult to verify, virtually free, and within everyone’s reach. The point is no longer to establish whether artificial intelligence can be used to commit crimes, but to understand how it is transforming the ways in which they are committed.

Statistics on crimes specifically linked to generative AI are still incomplete, because the main databases classify the facts by type – fraud, extortion, sexual exploitation, informational manipulation – and not yet by the technology used. Yet the available data are now too convergent to be dismissed as passing signs of change. Europol, in SOCTA 2025, describes AI as a factor transforming organized crime, a structural multiplier capable of increasing its efficiency, scale, and adaptability [1]. In the report Threat Landscape 2025, ENISA notes that cybercriminals are increasingly exploiting artificial intelligence to increase productivity and operational capabilities, and confirms that phishing remains the most widespread means of compromise, accounting for about 60% of the cases observed [2]. In the 2024 Internet Crime Report, the FBI recorded 859,532 complaints and 16.6 billion dollars in losses; phishing/“spoofing” alone [3] (a cyberattack using false identities) accounted for 193,407 complaints, while “Business Email Compromise” generated losses of more than 2.77 billion dollars [4].

The picture becomes even more alarming with respect to crimes affecting minors. The National Center for Missing & Exploited Children (NCMEC) reports that in 2024 the CyberTipline [5] received 20.5 million reports, corresponding to 29.2 million distinct incidents; reports of “online enticement” (online solicitation), a category that includes sextortion, exceed 546,000, with an increase of 192% compared with 2023 [6]. In the same year, reports involving the use of Generative AI grew by 1,325%, rising from 4,700 to 67,000 [7]. In 2024, finally, the Internet Watch Foundation identified 245 reports containing AI-generated images of child sexual abuse, an increase of 380% over the previous year [8].

1. Gen AI: an overview

Generative artificial intelligence (or generative AI) is a branch of AI that enables computer systems to create new original content, such as texts, images, music, videos, or code, starting from existing training data. Unlike traditional AI models, designed to classify or recognize data, generative systems are capable of producing new content, which often proves indistinguishable from that created by human beings.

The functioning of generative AI is based on deep neural networks, in particular on an architecture called the transformer, introduced in 2017 by a group of Google researchers [9]. This architecture underlies many advanced language models, such as OpenAI’s GPT (Generative Pre-trained Transformer), Google’s PaLM, Anthropic’s Claude, or Meta’s LLaMA.

Generative models are usually pre-trained on enormous amounts of data, often taken from the web, and subsequently optimized to perform specific tasks (for example automated writing, image generation, or code creation). During training, the model learns the statistical and semantic structures of language or visual data, so that it can reproduce them in a credible way.

A very well-known example of generative AI is ChatGPT, developed by OpenAI, which can sustain complex conversations, write essays, answer technical questions, and even imitate literary styles. Other tools such as DALL-E or Midjourney generate realistic images starting from a simple textual description (prompt), while others, such as Synthesia, allow the creation of videos starting from linguistic inputs.

From a technical point of view, the core of generation consists in predictive autocompletion: the model calculates the probability that one word (or one pixel, or one musical note) will follow another, and generates coherent sequences on the basis of this calculation. For language models, for example, this happens word by word or token by token, choosing those that are statistically most plausible in relation to the context provided by the user.
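
To make the mechanism concrete, the following toy sketch illustrates generation as repeated probabilistic continuation, with a hand-made bigram table standing in for a trained model (the table and its values are invented for illustration; a real language model learns such distributions from its training corpus):

```python
# Toy illustration of predictive autocompletion: given the current token,
# the model assigns a probability to each candidate continuation and
# samples one. The probability table is hand-made for illustration.
import random

bigram_probs = {
    "the":   {"court": 0.5, "video": 0.3, "model": 0.2},
    "court": {"ruled": 0.7, "found": 0.3},
    "video": {"shows": 0.6, "was": 0.4},
    "model": {"generates": 0.8, "predicts": 0.2},
}

def generate(token, steps=3):
    out = [token]
    for _ in range(steps):
        dist = bigram_probs.get(token)
        if dist is None:  # no known continuation: stop generating
            break
        candidates = list(dist)
        weights = list(dist.values())
        token = random.choices(candidates, weights=weights)[0]
        out.append(token)
    return " ".join(out)

print(generate("the"))  # e.g. "the court ruled"
```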

As regards deepfakes, this technology often involves the use of a “neural network” for machine learning [10]. The neural network begins as a sort of tabula rasa: a network of nodes governed by numerical parameters initially set at random [11]. Just as experience refines the brain’s neural connections, examples train the neural network [12]. If the network processes a wide range of training examples, it becomes able to create increasingly accurate models [13]. It is through this process that neural networks classify audio, video, or images and generate realistic imitations or alterations.

In itself, the emergence of machine learning through neural-network-based methods heralds a significant increase in the ability to create false images, videos, and audio. But the story does not end there: generative adversarial networks (GANs) enter the scene [14]. GANs, introduced in 2014 by Ian Goodfellow (later a Google researcher) and colleagues, use two neural networks simultaneously: one, called the generator, draws on a dataset to produce a sample that imitates it; the other, the discriminator, evaluates how successful the generator has been. Iteration after iteration, the discriminator’s evaluations refine the generator’s work. The result far surpasses the speed, scope, and subtlety that human reviewers could achieve, so that the growing sophistication of the GAN approach will inevitably lead to the production of increasingly convincing deepfakes [15].
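
A minimal sketch of the adversarial loop may help, assuming PyTorch and a one-dimensional toy “authentic” distribution (both assumptions of this example, not of the sources cited): the generator learns to produce samples the discriminator can no longer tell apart from real ones, the same division of labor that deepfake pipelines apply to images, audio, and video at far greater scale.

```python
# Minimal GAN sketch (assumes PyTorch is installed). A generator learns to
# mimic a toy "authentic" distribution, N(3, 0.5), while a discriminator
# learns to score how real each sample looks. Illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # samples from the target data
    fake = G(torch.randn(64, 8))            # synthetic samples from noise

    # Discriminator step: push scores toward 1 for real, 0 for fake
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to fool the just-updated discriminator
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After training, generated samples should cluster near the target mean
print(G(torch.randn(256, 8)).mean().item())  # roughly 3.0
```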

2. From the generation of content to the generation of deception

GenAI is often described as a “creative” technology. On the legal plane, however, its most relevant quality is another: the capacity to produce plausible outputs, adaptable to context and difficult to distinguish from human ones. Contemporary generative models descend from the evolution of deep-learning architectures, in particular transformers, introduced in 2017, which made it possible to process enormous sequences of textual, visual, and audiovisual data much more efficiently than previous models [16]. The OECD has observed that GenAI imposes on governments a new challenge: governing a technology capable of transforming information, markets, and social relations without sacrificing fundamental rights, reliability, and public trust [17].

If in the past the sophisticated manipulation of images, audio, and video remained largely confined to highly specialized actors, today commercial tools, often intuitive and relatively inexpensive, allow a much wider audience to produce synthetic content of varying degrees of danger. It is here that the issue ceases to be exclusively technological and enters real life, not only generating entirely new criminal offenses, but also giving a different offensive power to already known crimes such as fraud, extortion, defamation, identity substitution, and evidentiary manipulation, altering the very premises of credible deception [18].

An example that has by now become paradigmatic is the University of Washington experiment on Barack Obama’s face. Some researchers, using a neural-network-based tool they had created, showed how, given sufficient audiovisual material, it was possible to produce a video in which the former president uttered phrases he had never said [19]. That case had a demonstrative function. Today, however, the problem is no longer to establish whether this is possible, but to recognize that the same logic can now be used in romance scams, false statements, blackmail videos, voice clones, or pseudo-evidence. In other words, GenAI does not merely generate content: it creates plausible and credible situations [20].

3. Persuasive criminality: phishing, voice cloning, and deepfake fraud

It can happen on an international and serious organised crime scale, and it can happen in someone’s bedroom … You can think of any crime type and put it through an AI lens and say: “What is the opportunity here?” (Alex Murray, UK National Crime Agency) [21]

The first field in which GenAI has shown in concrete terms its criminogenic capacity is that of persuasive fraud. Phishing is a form of social engineering in which the perpetrator pretends to be a trusted contact and induces the victim to share sensitive information, such as passwords or banking data [22]. Traditional phishing was often recognizable by its grammatical errors, standardized formulas, and clumsy registers. GenAI removes precisely these signals: it produces correct texts, imitates communicative and writing styles, constructs plausible digital identities, personalizes messages for specific recipients, and circumvents controls by exploiting the vulnerabilities of email filters and users’ habits. The effect is not only quantitative, but qualitative: the false appears more credible because it increasingly resembles ordinary communication [23].

The Anti-Phishing Working Group (APWG) observed 1,130,393 attacks in the second quarter of 2025, the highest quarterly value since mid-2023 [24]; in the fourth quarter, moreover, the main targets were social media services and SaaS/Webmail, each accounting for 20.3% of the attacks observed, while smishing (phishing via SMS) continued to grow by 30-40% quarter by quarter [25]. In addition, ENISA notes that at the beginning of 2025 AI-supported phishing campaigns already represented more than 80% of the social-engineering activity observed [26]. Microsoft adds that AI-automated phishing emails reach click rates of 54%, as against 12% for standard attempts [27]. In the same direction is the FBI, according to which in the first seven months of 2025 AI appears in more than 9,000 reports, distributed across different forms of online fraud [28].

The FBI warned that criminals use GenAI to facilitate financial fraud on a larger scale, exploiting artificial images, convincing texts, cloned audio, and synthetic videos to make the deception more plausible [29]. The FTC, for its part, reported the growing risk posed by voice cloning systems, capable of reproducing the voice of a family member or a hierarchical superior and triggering an immediate emotional response in the victim [30]. The interesting element, from a legal point of view, is that AI does not intervene only in the final act of the fraud, but already in the construction of the context of trust: it creates the apparent relationship that induces the victim to believe, react impulsively, and lower their defenses.

It is therefore not surprising that the most recent examples are less and less “home-made” and more and more organized. In February 2025, the Hong Kong Police announced that it had dismantled two criminal organizations that used deepfakes and synthetic identities for online fraud, with 58 arrests and total estimated losses of about 400 million Hong Kong dollars [31]. But there are not only economic frauds in the classic sense. In 2025, the FBI issued an alert on so-called “virtual kidnapping,” in which criminals use altered images or videos as false “proof of life” in order to make extortion demands for non-existent kidnappings more credible [32]. The leap here is evident: GenAI makes it easier to simulate not only a message, but an entire emotional scene [33].

In this sense, generative phishing represents something more than a simple technological updating of social engineering. It is the emergence of a persuasive criminality, in which false content matters not so much for what it says, as for its capacity to insert itself into an already plausible relationship. It is precisely this element that explains why the response of the law cannot be exhausted in the repression of the completed act alone: when deception is automated, personalized, and scalable, prevention and transparency become part of the legal response as much as sanction [34].

4. Sextortion, synthetic pornography, and non-consensual artificial sexuality

If fraud shows the most economically visible side of GenAI, its most disturbing impact emerges in crimes that affect the sexuality and vulnerability of persons. Sextortion, already known before the explosion of generative AI, changes nature when the perpetrator does not merely threaten the dissemination of real material, but can create or alter sexually explicit content starting from innocent photographs, images found online, or simple identifying data. In this case, the threat no longer concerns only what exists, but what can be made credible in a few minutes [35].

This transformation is not exhausted, however, in the individual extortionate act or in the manipulation of a single piece of content. As Internazionale [36] recently observed, drawing on an analysis by The Economist, artificial intelligence is reshaping the entire economy of digital pornography: from generators of explicit images to “nudify” apps, all the way to platforms that must decide whether to admit, monetize, or at least label synthetic content that appears authentic.

In this context, sextortion, pornographic deepfakes, and the exploitation of another person’s image no longer appear as episodic deviations, but as a pervasive and exponentially growing phenomenon that is reshaping the very boundaries of pornography, of its consumption, and of the imagery connected to it, with a disturbing potential normalization of illicit deviance in real life, especially as regards minors.

NCMEC data, for example, confirm the scale of the phenomenon. In 2024, reports of “online enticement” (online solicitation) exceeded 546,000, with growth of 192% compared with the previous year; the same organization reports receiving almost 100 reports a day of “financial sextortion” (extortionate conduct carried out through the internet and characterized by the threat to disseminate sexually explicit images or videos depicting the victim, in order to obtain something from the latter), and being aware, since 2021, of at least 36 adolescents who took their own lives after having been victims of sextortion [37].

Moreover, 2024 closed with some striking judicial cases in the field of artificial intelligence. In particular, a twenty-seven-year-old from Bolton (UK), Hugh Nelson, was sentenced to 18 years’ imprisonment for having created, with artificial intelligence, images of child abuse, using as a basis photographs of minors who really existed [38]. It was the first trial of this kind in England.

Nelson had used Daz 3D, a computer program with an artificial intelligence function, to turn “normal” images into abuse images. In some cases, pedophiles had commissioned the images, directly providing photographs of minors with whom they had had contacts in real life.

By selling the images in various online chats, he earned about five thousand pounds over a period of 18 months.

Although there had been previous convictions for deepfakes (in which a face is often superimposed on actions carried out by another body), the novelty of this case lies precisely in the full creation of 3D characters from innocuous photographs [39].

In this direction, the Internet Watch Foundation reported in 2024 an increase of 380% in reports relating to AI-generated images of child sexual abuse [40].

The harm consists not only in the possible abusive acquisition of authentic images, but in the artificial construction of a fabricated and unwanted sexuality, in the commodification of the represented body, in the reputational threat, and in the capacity of artificial content to circulate as if it were real. This is why the regulatory debate is shifting toward broader categories, such as that of non-consensual intimate content, capable of encompassing images and videos that are artificially generated or altered [41].

The growth in recourse to AI-generated images to satisfy the black market of abusers carries with it a challenge to the resilience of the relevant rules, conceived in a completely different era. Increasing relevance is therefore assumed by those rulings of the Italian Court of Cassation that, for some time now, have begun to classify comics and illustrations of erotic stories depicting minors as child pornography material, considering that everything capable of giving the viewer the idea that the object of the pornographic representation is a minor must be included within that notion [42].

5. The difficulties of recognition

The different types of deepfakes, from the simplest ones, such as face-swapping, to lip-syncing and puppet-mastery (respectively the exchange of faces, the synchronization of a video of a person speaking with a different audio track, and the animation of a non-existent subject), can be as complex to create realistically as they are difficult to distinguish from real images.

Although over time, in step with the evolution of generative technology, considerable progress has been made in the performance of deepfake detectors, there still exist significant limits that prevent complete and effective identification.

First of all, as always happens with AI technologies that need to learn in order to improve, access to large databases of deepfakes is a determining factor for the development of effective detection techniques. However, analysis of the quality of the videos present in these datasets reveals various discrepancies with respect to the manipulated content actually circulating on the Internet. The most common artifacts in these databases are temporal flickering at some moments during speech, blurring around facial regions, excessive smoothness or a lack of detail in facial texture, the absence of head movements or rotations, the absence of objects obstructing the face (glasses, lighting effects, and the like), and sensitivity to variations in posture or gaze, for example with inconsistencies in skin color. These artifacts derive from imperfections in the steps of the manipulation techniques. Moreover, low-quality content is hardly convincing or capable of creating a real impression. Therefore, even if detection methods prove effective on such videos, there is no guarantee that they will maintain the same performance in real situations, with photos and videos created using more advanced techniques.

Moreover, deepfake detection methods are formulated as a binary classification problem, in which each sample is labeled as real or fake. This type of classification is simpler in controlled environments, where techniques are developed and tested on audiovisual content whose origin (original or falsified) is known. In field application, however, videos can be altered in ways other than deepfakes (for example, with Photoshop), so content not detected as manipulated is not necessarily authentic. In addition, a single deepfake may present more than one type of alteration, both audio and visual, making a single label inaccurate. In content with multiple faces, usually only one or some of them are manipulated, and often only in certain sequences. The binary classification scheme should therefore be expanded to multi-category/multi-label classification and to local, frame-level detection, in order to make the analysis more effective.
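
By way of illustration, a multi-label, frame-level report could be structured as in the following sketch, where `stub_detector` is a hypothetical stand-in for a real model and the manipulation categories are assumptions of this example, not an established taxonomy:

```python
# Sketch of the richer labeling scheme described above: each frame gets an
# independent score for several manipulation types (multi-label), so mixed
# or partial forgeries remain visible instead of one real/fake bit.
from dataclasses import dataclass

MANIPULATION_TYPES = ["face_swap", "lip_sync", "voice_clone", "splicing"]

@dataclass
class FrameReport:
    frame_index: int
    scores: dict  # manipulation type -> probability in [0, 1]

def analyze(frames, detector, threshold=0.5):
    reports = [FrameReport(i, detector(f)) for i, f in enumerate(frames)]
    flagged = [(r.frame_index, kind)
               for r in reports
               for kind, p in r.scores.items()
               if p >= threshold]
    return reports, flagged

def stub_detector(frame):
    # A real detector would analyze pixels and audio; here frames are dicts
    return {k: frame.get(k, 0.0) for k in MANIPULATION_TYPES}

frames = [{"lip_sync": 0.9}, {}, {"face_swap": 0.7, "voice_clone": 0.6}]
_, flagged = analyze(frames, stub_detector)
print(flagged)  # [(0, 'lip_sync'), (2, 'face_swap'), (2, 'voice_clone')]
```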

The most elementary deepfake detection methods are generally designed for batch analysis on large datasets. When such techniques are employed by journalists or law enforcement agencies, however, only a limited number of videos is often available. Moreover, a numerical score representing the probability that content is real or fake has little value if it is not accompanied by an explanation justifying that score; yet most detection methods, especially those based on deep learning, offer no such explanation because of their “black-box” nature, that is, the impossibility of knowing their internal functioning from the outside [43].

6. Deepfakes and justice: the crisis of audiovisual evidence

A complete legal reflection on the subject must necessarily grapple with the impacts of Generative AI on the relationship between truth and procedural ascertainment: the effects of deepfakes, in fact, are not limited only to the dissemination of the false, but are such as to undermine trust in the true. Audiovisual evidence has traditionally benefited, in many contexts, from a sort of practical reliance: a video “shows,” a photograph “documents,” an audio recording “records.” Deepfakes overturn these assumptions and undermine precisely this basic trust, giving rise to one of the greatest threats of contemporary society [44].

The difficulty, however, is not only cultural but also technical. The structural limits of detection systems are known: datasets not fully representative, overly rigid binary classification, poor transparency of the models, vulnerability to race and gender bias, difficulty in dealing with multimodal content and local alterations. The forensic evaluation of deepfakes, moreover, must today confront the use of anti-forensics techniques [45], used precisely to make forensic reconstruction difficult and aimed at concealing the elements useful for ascertaining falsifications. For example, an AI-content detection tool may prove much less reliable when the same contents have been uploaded, compressed, reconverted, and disseminated through social platforms [46].

Social platforms such as Twitter, Facebook, or Instagram, which are the main online channels used to disseminate audiovisual content, strip such content of metadata, subject it to downsampling (a reduction in definition), and compress it significantly upon upload, in order to save network bandwidth. These manipulations, commonly known as social media laundering, remove clues relating to any underlying falsifications and increase the rate of false positives in detection. Detection methods based on signal-level key points are particularly vulnerable to this kind of distortion. An effective countermeasure to improve the accuracy of detectors on content “cleaned” by social media is to include such effects in the training data, while at the same time expanding evaluation databases with visual content that has passed through social networks.
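
A “laundering-aware” augmentation step of the kind just described might look like the following sketch, which assumes the Pillow imaging library; the resizing and JPEG-quality ranges are illustrative guesses, not platform specifications:

```python
# "Laundering-aware" augmentation sketch (assumes the Pillow library).
# It reproduces what platforms typically do on upload: downsampling,
# aggressive JPEG recompression, and metadata stripping.
import io
import random
from PIL import Image

def simulate_social_media_upload(img: Image.Image) -> Image.Image:
    # Downsample, as platforms do to save bandwidth
    scale = random.uniform(0.4, 0.8)
    img = img.resize((max(1, int(img.width * scale)),
                      max(1, int(img.height * scale))))
    # Re-encode as JPEG at an aggressive quality level
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG",
                            quality=random.randint(40, 75))
    buf.seek(0)
    # Re-opening the buffer discards EXIF and other metadata
    return Image.open(buf)

# Usage: augment training images so the detector also sees degraded copies
original = Image.new("RGB", (512, 512), "gray")  # placeholder image
laundered = simulate_social_media_upload(original)
print(laundered.size, laundered.format)          # smaller size, 'JPEG'
```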

7. The entry of deepfakes into justice

Among the concerns about the malicious use of deepfakes, the possible role of this technology in manipulating the formation of evidence in trials, criminal and otherwise, stands out. As early as 2021, a European Commission study observed that deepfakes raised “serious concerns regarding the fundamental credibility and admissibility of audiovisual recordings as electronic evidence in courts” [47].

Indeed, the first consequence in public opinion of the widespread use of deepfakes is the erosion of trust in institutions and in the concept of truth, with evident damage to the credibility of the judicial system.

This technology, in fact, forces the viewer to reassess the level of trust placed in the videos and images with which they interact, particularly in the judicial field, precisely because they may appear extremely realistic but in reality be entirely false; and even when the viewer is aware of the falsity of the content, perception may nonetheless influence the subconscious. Their very existence undermines the “intrinsic” trust that is normally attributed to video and photographic evidence [48].

In most legal systems, moreover, the regulatory vacuum deriving from the novelty of the subject makes it difficult to choose under which criminal offense to frame the use of deepfakes. Abroad, it has been suggested that recourse be had to the civil and criminal rules on copyright [49], an approach that, however, often proves limiting. Preferable, instead, would be the widespread introduction of new, specific rules, following (in the EU) the path already traced by the AI Act, as will be seen below [50].

The most critical aspect, in any case, concerns the value normally attributed to documentary evidence such as images and videos [51], often admitted without any kind of computer-based verification of their genuineness, provenance, and originality. Sometimes, the only guarantee of provenance is the mere integrity of the chain of custody of the files and devices in which they are contained, which, however, provides no guarantee at all of the genuineness of the content ab origine [52]. This “intrinsic” trust is further demonstrated by the fact that – in most legal systems – these examinations are carried out outside the courtroom, before the trial, usually during the investigations [53].

It is here that the technology produces perhaps its deepest effect on the law: the crisis of audiovisual evidence consists not only in the risk that a falsehood enter the proceedings, but in the fact that even the true ceases to appear immediately reliable. The damage, in other words, is not only epistemic, but institutional [54].

Let us consider, for example, one of the forms of evidence regarded as most significant in investigations and judicial proceedings: interceptions, whether they are “traditional” telephone wiretaps, environmental recordings, or those carried out through spyware. At present, their truthfulness and technical reliability are assessed on the basis of the proper method of using the tool, but the originality of the content is not examined from a technical and IT perspective. If we add to this shortcoming the growing spread of increasingly realistic and less detectable voice deepfakes, the result is a dialogue that has been correctly obtained and assessed procedurally, yet may in fact contain two or more voices, some of which have been generated by artificial intelligence. Interceptions therefore risk conferring certainty, through a formally correct method, on elements that are actually the product of artificial manipulation.

8. The AI Act: not an AI criminal code, but a European perimeter of unlawfulness

In this scenario, European law and Italian law are beginning to react not only with classical categories, but also with transparency obligations, removal tools, supervisory powers, and new targeted offenses [55].

The AI Act implements a governance of risk, transparency, and control of the technical supply chain. Precisely for this reason, however, it is also central for those dealing with crimes linked to artificial intelligence: because it defines the European perimeter of what the legislator considers intolerable or, at the very least, in need of reinforced safeguards [56].

On the one hand, Article 5 prohibits manipulative or exploitative practices and those incompatible with fundamental rights, including certain uses that are particularly invasive or harmful to dignity and individual autonomy [57]. Its value is above all systemic: it makes clear that the Union does not regard AI as a neutral instrument, but as a technology capable of amplifying deception, domination, and exploitation.

On the other hand, Article 50 is the provision most directly connected to deepfakes and synthetic content. The regulation imposes transparency obligations on content generated or artificially manipulated, requiring that deepfakes and certain textual contents disseminated to the public be declared as such, save for specific exceptions, for example for activities of prevention or repression of crimes. The cultural and legal significance of the rule is considerable: the Union does not prohibit synthetic content as such, but considers it unacceptable that it should circulate as authentic without the recipient being placed in a position to recognize its artificial nature [58].
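
To fix ideas, the Article 50 logic of declaring synthetic origin can be pictured with a toy machine-readable disclosure such as the sidecar manifest sketched below; the schema is purely hypothetical (the Regulation prescribes no format, and real deployments rely on robust techniques such as watermarking or C2PA-style provenance manifests):

```python
# Toy "machine-readable disclosure" in the spirit of Article 50: synthetic
# content ships with a manifest declaring its artificial origin. The JSON
# schema here is invented for illustration only.
import json

def write_disclosure(content_path: str, generator: str) -> str:
    manifest = {
        "content": content_path,
        "synthetic": True,            # declared as generated/manipulated
        "generator": generator,
        "disclosure_schema": "0.1",   # hypothetical schema version
    }
    sidecar = content_path + ".ai-disclosure.json"
    with open(sidecar, "w") as f:
        json.dump(manifest, f, indent=2)
    return sidecar

print(write_disclosure("video.mp4", "example-model-v1"))
```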

Against this background, the debate, now open at the European level, on Article 4 of the AI Act concerning AI Literacy also becomes relevant: “Providers and deployers of AI systems shall take measures to ensure, to the greatest extent possible, a sufficient level of AI literacy of their staff and of any other person dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training, as well as the context in which the AI systems are to be used, and taking into account the persons or groups of persons on whom the AI systems are to be used.” In the phenomena considered here – generative phishing, deepfakes, sextortion, and pseudo-evidence – the vulnerability of the legal order also arises from a cognitive asymmetry: between those who, on the one hand, generate the content and those who, on the other, receive it, disseminate it, or are called upon to assess it. It is therefore significant that, in the discussion initiated with the so-called Digital Omnibus Package [59], there is debate as to whether providers and deployers should remain under an active obligation to adopt measures or whether that obligation should be downgraded to mere encouragement [60].

Even more relevant is the content of Articles 53 and 55, devoted to general-purpose AI models and to those with systemic risk. Here the Regulation moves beyond the logic of the individual malicious user and looks at the organizational responsibility of those who develop and place powerful models into circulation: technical documentation, information for those who integrate the model, copyright policy, public summary of training data, evaluation and mitigation of systemic risks, incident reporting, and cybersecurity obligations [61]. In other words, the AI Act shifts attention upstream of the criminal act, toward the design and distribution of the infrastructure that makes certain forms of offense possible [62].

9. The DSA: from content transparency to platform responsibility

If the AI Act mainly safeguards the transparency of non-authentic content and the governance of the technical supply chain, the Digital Services Act [63] intervenes on the different plane of the circulation of illegal content on platforms. And it is precisely this profile that recent cases, such as that of Grok integrated into X, make particularly evident: the harm derives not only from the generation of the deepfake, but from its immediate publication, from algorithmic visibility, and from the speed of its propagation. In this perspective, the “notice and action” mechanism under Article 16 DSA, the removal orders provided for by Article 9, the role of trusted flaggers under Article 22, and, for Very Large Online Platforms, the obligations to assess and mitigate systemic risks under Articles 34 and 35 become relevant. It follows that, in the context of sexual deepfakes and non-consensual intimate images, protection cannot be entrusted only to the subsequent criminal sanction or to the transparency obligations of the provider of the model, but must also be articulated through rapid procedures of notice, removal, disabling of access, and containment of dissemination.
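
Purely as an illustration of the mechanism, the elements that Article 16(2) DSA requires a notice to contain can be pictured as a simple data structure; the field names below are hypothetical and the sketch is not a compliant implementation:

```python
# Illustrative data shape for an Article 16 DSA notice: a substantiated
# explanation, the exact electronic location, the notifier's identity
# (which Article 16 allows to be omitted for certain sexual offenses),
# and a good-faith statement. Field names are invented for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Article16Notice:
    content_url: str            # exact electronic location of the item
    explanation: str            # why the content is allegedly illegal
    notifier_name: str
    notifier_email: str
    good_faith_statement: bool  # confirmation the notice is accurate
    submitted_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

notice = Article16Notice(
    content_url="https://platform.example/post/123",
    explanation="Non-consensual synthetic intimate image of the notifier.",
    notifier_name="Jane Doe",
    notifier_email="jane@example.org",
    good_faith_statement=True,
)
print(notice.content_url, notice.submitted_at)
```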

10. Italian Law No. 132/2025: from regulatory coordination to the criminal response

In Italy, Law 23 September 2025, No. 132 does not merely “follow” the AI Act. It adopts it as a framework, but takes a further step, grafting into domestic law principles, authorities, delegations, and some criminal-law choices of considerable importance. Article 3 establishes that the development and use of general-purpose AI systems and models must take place in compliance with fundamental rights, transparency, proportionality, security, data protection, accuracy, non-discrimination, monitoring, and human intervention; the same article further requires that the use of AI not prejudice democratic debate through unlawful interference and that cybersecurity be ensured throughout the entire life cycle of systems and models [64]. Article 4 adds, for the information sector, the principles of objectivity, completeness, impartiality, and fairness, in addition to transparency in the processing of personal data [65]. Taken together, these provisions already provide a robust interpretative key for phenomena such as political deepfakes, informational manipulation, and impersonation.

Article 15 then intervenes directly on judicial activity, establishing that, even in cases involving the use of AI systems, every decision on the interpretation and application of the law, on the assessment of facts and evidence, and on the adoption of measures always remains reserved to the magistrate [66]. It is a clause whose value is not only symbolic but systemic: it recognizes that AI may enter the organization of justice, but cannot replace human judgment at the most delicate point, that of ascertainment. Articles 20 and 24, moreover, designate the Agenzia per l’Italia Digitale (AgID) and the Agenzia per la Cybersicurezza Nazionale (ACN) as national authorities for artificial intelligence, attributing to them specific powers of supervision, inspection, and sanction, as well as the task of coordinating the national legal order with the European system, including the application of the measures and administrative sanctions referred to in Article 99 AI Act [67]. Article 24, moreover, delegates the Government to regulate cases of unlawful creation and use of AI systems, providing also for precautionary tools to inhibit dissemination and remove unlawfully generated content, as well as the possibility of introducing autonomous criminal offenses [68].

But perhaps the most significant datum is that the Italian law does not stop at future delegation. With Article 26 it already intervenes on the criminal code, introducing (in Article 61 no. 11-undecies) a common aggravating circumstance for crimes committed through AI systems when these have constituted an insidious means or have hindered the defense, and above all inserting the new Article 612-quater of the criminal code, entitled “Illecita diffusione di contenuti generati o alterati con sistemi di intelligenza artificiale” (unlawful dissemination of content generated or altered with artificial intelligence systems) [69]. The offense punishes anyone who causes unjust harm by disseminating, without consent, images, videos, or voices falsified or altered by means of AI and capable of misleading as to their genuineness. This is a passage of great importance: for the first time in an explicit way, the Italian legislator recognizes that harm caused by synthetic content is not only a technological variant of falsehood, but an autonomous offense against the person, reputation, and self-determination.

The new criminal offense, moreover, does not exhaust the framework of remedies. Other offenses such as defamation, identity substitution, and unlawful processing of data remain applicable, where their conditions are met; but, above all, the victim’s effective protection depends on the capacity to coordinate the criminal-law response with the European tools for removal and containment of circulation. In cases in which manipulated images, videos, or voices are conveyed through large online platforms, protection does not involve only the ascertainment of the responsibility of the material perpetrator, but also the timely activation of procedures for notice, disabling of access, and mitigation of systemic risk. The response to sexual deepfakes thus appears not only repressive but increasingly multilevel: criminal, regulatory, and procedural, distributed among the author of the offense, the provider of the system, and the platform that makes its propagation possible.

However, the regulatory intervention still appears partial, with wide margins for improvement. It confines itself to conduct involving the creation and dissemination of material produced with artificial intelligence, without contemplating the further possibilities that the technology offers. On the one hand, further criminal conduct could be constituted by the mere use of AI as a research tool, since it enables the processing of enormous quantities of data for cataloguing purposes, including online content, in ways that go beyond what is lawful (for example, the profiling of social-media users in order to determine their vulnerability to scams). On the other hand, the law does not take into consideration the possibility that publicly accessible AIs may be trained by users with false information, in order to condition their responses to future users.

With GenAI, therefore, there is no longer only a distinction between true and false, but between credible contents and verifiable contents. Fraud becomes more persuasive, sextortion more devastating, audiovisual evidence more fragile. In this scenario, the AI Act constructs the first great European perimeter of transparency, governance, and control of models.

The law, however, cannot limit itself to chasing technology, especially when its evolution follows exponential accelerations. It must learn to prevent deception where it today truly takes shape: in the design of content, in its dissemination, in its capacity to appear authentic, and in its fitness to produce harm even before it can be disproved. It is here that the true challenge of crimes linked to generative AI lies: not in the fascination of the perfect falsehood, but in the resilience of the legal order in the face of contents that, more and more often, ask to be believed even before they are verified.

Prof. Avv. Roberto De Vita
Avv. Marco Della Bruna
Avv. Giada Caprini

References

[1] Europol, EU SOCTA 2025, <https://www.europol.europa.eu/cms/sites/default/files/documents/EU-SOCTA-2025.pdf>, according to which AI and new technologies act both as catalysts of crime and as multipliers of operational efficiency.

[2] ENISA, Threat Landscape 2025, <https://www.enisa.europa.eu/sites/default/files/2026-01/ENISA%20Threat%20Landscape%202025_v1.2.pdf>

[3] <https://www.giustizia.it/giustizia/it/mg_2_5_12_1.page?contentId=GLM1144621#>

[4] FBI, 2024 Internet Crime Report, <https://www.ic3.gov/AnnualReport/Reports/2024_IC3Report.pdf>: 859,532 complaints and 16.6 billion dollars in losses; phishing/spoofing with 193,407 reports; Business Email Compromise with losses exceeding 2.77 billion dollars; see also Europol, EU SOCTA 2025, <https://www.europol.europa.eu/cms/sites/default/files/documents/EU-SOCTA-2025.pdf>

[5] <https://www.ncmec.org/gethelpnow/cybertipline/cybertiplinedata>

[6] NCMEC, CyberTipline Data 2024, <https://www.ncmec.org/gethelpnow/cybertipline/cybertiplinedata>: 20.5 million reports, equivalent to 29.2 million distinct incidents; more than 546,000 cases of online enticement, with +192% compared with 2023.

[7] NCMEC, 2024 in Numbers, <https://www.ncmec.org/blog/2025/ncmec-releases-new-data-2024-in-numbers>: an increase of 1,325% in reports involving Generative AI, almost 100 reports a day of financial sextortion, and at least 36 adolescents dead by suicide since 2021 after sextortion.

[8] Internet Watch Foundation, Annual Data & Insights 2024: <https://www.iwf.org.uk/annual-data-insights-report-2024/>: 245 reports relating to AI-generated images of child sexual abuse, with an increase of 380% compared with 2023; <https://www.iwf.org.uk/annual-data-insights-report-2024/data-and-insights/ai-generated-child-sexual-abuse/> and <https://www.iwf.org.uk/about-us/why-we-exist/our-research/how-ai-is-being-abused-to-create-child-sexual-abuse-imagery/>

[9] Vaswani et al., Attention Is All You Need, 2017, <https://arxiv.org/abs/1706.03762>.

[10] Danielle K. Citron & Robert Chesney, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, in 107 California Law Review 1753 (2019), <https://scholarship.law.bu.edu/faculty_scholarship/640>

[11] Larry Hardesty, Explained: Neural Networks, MIT NEWS (Apr. 14, 2017), <http://news.mit.edu/2017/explained-neural-networks-deep-learning-0414> [<https://perma.cc/VTA6-4Z2D>].

[12] Natalie Wolchover, New Theory Cracks Open the Black Box of Deep Neural Networks, WIRED (Oct. 8, 2017), <https://www.wired.com/story/new-theory-deep-learningimages> [https://perma.cc/9A8L-LFTQ].

[13] Will Knight, Meet the Fake Celebrities Dreamed Up By AI, MIT TECH. REV. (Oct. 31, 2017), <https://www.technologyreview.com/the-download/609290/meet-the-fake-celebrities-dreamed-up-by-ai> [https://perma.cc/D3A3-JFY4].

[14] Ian J. Goodfellow et al., Generative Adversarial Nets (June 10, 2014) (Neural Information Processing Systems conference paper), <https://arxiv.org/abs/1406.2661> [https://perma.cc/97SH-H7DD] (introducing the GAN approach)

[15] Cade Metz & Keith Collins, How an A.I. ‘Cat-and-Mouse Game’ Generates Believable Fake Photos, N.Y. TIMES (Jan. 2, 2018), <https://www.nytimes.com/interactive/2018/01/02/technology/ai-generated-photos.html>

[16] Ashish Vaswani et al., Attention Is All You Need (NeurIPS 2017), <https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf>

[17] OECD, Initial policy considerations for generative artificial intelligence (2023), <https://www.oecd.org/content/dam/oecd/en/publications/reports/2023/09/initial-policy-considerations-for-generative-artificial-intelligence_1a9ab450/fae2d1e6-en.pdf> and <https://consumer.ftc.gov/consumer-alerts/2024/04/fighting-back-against-harmful-voice-cloning>

[18] <https://www.europol.europa.eu/cms/sites/default/files/documents/EU-SOCTA-2025.pdf>

[19] <https://www.washington.edu/news/2017/07/11/lip-syncing-obama-new-tools-turn-audio-clips-into-realistic-video/>

[20] <https://consumer.ftc.gov/consumer-alerts/2024/04/fighting-back-against-harmful-voice-cloning>

[21] Alex Murray, National Crime Agency, UK, <https://www.theguardian.com/technology/2024/nov/24/ai-increasingly-used-for-sextortion-scams-and-child-abuse-says-senior-uk-police-chief>.

[22] K. Martin, AI Cyber Attack Statistics 2025, TechAdvisors, 27 May 2025, <https://tech-adv.com/blog/ai-cyber-attack-statistics/#:~:text=There%20were%2019%25%20more%20deepfake,a%202%2C137%25%20increase%20from%202022>

[23] On the qualitative leap of generative phishing and on AI’s growing capacity to make fraudulent content more credible, see ENISA, Threat Landscape 2025, <https://www.enisa.europa.eu/sites/default/files/2026-01/ENISA%20Threat%20Landscape%202025_v1.2.pdf>; FBI, 2024 Internet Crime Report, <https://www.ic3.gov/AnnualReport/Reports/2024_IC3Report.pdf>; FBI, Criminals Use Generative AI to Facilitate Financial Fraud, <https://www.ic3.gov/PSA/2024/PSA241203>

[24] APWG, Phishing Activity Trends Report, 2nd Quarter 2025, <https://docs.apwg.org/reports/apwg_trends_report_q2_2025.pdf>

[25] APWG, Phishing Activity Trends Report, 4th Quarter 2025, <https://docs.apwg.org/reports/apwg_trends_report_q4_2025.pdf>

[26] ENISA, Threat Landscape 2025, <https://www.enisa.europa.eu/sites/default/files/2025-11/ENISA%20Threat%20Landscape%202025.pdf>

[27] Microsoft, Microsoft Digital Defense Report 2025, <https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/msc/documents/presentations/CSR/Microsoft-Digital-Defense-Report-2025.pdf>

[28] FBI, Don’t Let Scammers Ruin Your Holiday Season, 2025, <https://www.fbi.gov/news/press-releases/dont-let-scammers-ruin-your-holiday-season>

[29] FBI, Criminals Use Generative AI to Facilitate Financial Fraud, <https://www.ic3.gov/PSA/2024/PSA241203>, which reports the use of generative content to make financial fraud more convincing.

[30] Federal Trade Commission, Fighting back against harmful voice cloning, <https://consumer.ftc.gov/consumer-alerts/2024/04/fighting-back-against-harmful-voice-cloning>, on the risk of voice cloning and on consumer-protection responses.

[31] Hong Kong Police Force, February 2025 press release on the crackdown on criminal syndicates using deepfakes and synthetic identities in online fraud, with 58 arrests and estimated losses of close to HK$400 million, <https://www.police.gov.hk/ppp_en/03_police_message/pr/press-release-detail.html?refno=P202502110002>

[32] FBI, 2025 alert on virtual kidnapping and the use of altered images as false “proof of life”: <https://www.fbi.gov/investigate/cyber/alerts/2025/criminals-using-altered-proof-of-life-media-to-extort-victims-in-virtual-kidnapping-for-ransom-scams#:~:text=The%20Federal%20Bureau%20of%20Investigation,virtual%20kidnapping%20for%20ransom%20scams>

[33] <https://consumer.ftc.gov/consumer-alerts/2024/04/fighting-back-against-harmful-voice-cloning>

[34] <https://www.europol.europa.eu/cms/sites/default/files/documents/EU-SOCTA-2025.pdf>

[35] On the link between GenAI, sextortion, and the creation or alteration of sexually explicit images, see NCMEC, CyberTipline Data 2024, <https://www.ncmec.org/gethelpnow/cybertipline/cybertiplinedata>; NCMEC, Generative AI, <https://ncmec.org/netsmartz/topics/generative-ai>; NCMEC, 2024 in Numbers, <https://www.ncmec.org/blog/2025/ncmec-releases-new-data-2024-in-numbers>

[36] The Economist, L’intelligenza artificiale sta rivoluzionando l’industria del porno, in Internazionale, 30 January 2026, <https://www.internazionale.it/notizie/2026/01/30/amp/intelligenza-artificiale-porno>

[37] NCMEC, CyberTipline Data 2024, <https://www.ncmec.org/gethelpnow/cybertipline/cybertiplinedata>: 20.5 million reports, equivalent to 29.2 million distinct incidents; more than 546,000 cases of online enticement, with +192% compared with 2023; NCMEC, 2024 in Numbers, <https://www.ncmec.org/blog/2025/ncmec-releases-new-data-2024-in-numbers>: an increase of 1,325% in reports involving Generative AI and almost 100 reports a day of financial sextortion; at least 36 adolescents dead by suicide since 2021 after sextortion victimization.

[38] The Guardian, Man who used AI to create child abuse images jailed for 18 years, 28 October 2024, <https://www.theguardian.com/uk-news/2024/oct/28/man-who-used-ai-to-create-child-abuse-images-jailed-for-18-years>

[39] Crown Prosecution Service, Hugh Nelson case, <https://www.cps.gov.uk/cps/news/man-who-used-ai-technology-create-child-sexual-abuse-images-jailed>: an 18-year sentence for the creation, with AI and 3D software, of child-abuse images starting from ordinary photographs.

[40] Internet Watch Foundation, Annual Data & Insights 2024, <https://www.iwf.org.uk/annual-data-insights-report-2024/> and <https://www.iwf.org.uk/annual-data-insights-report-2024/data-and-insights/ai-generated-child-sexual-abuse/>: 245 reports relating to AI-generated images of child sexual abuse, with an increase of 380% compared with 2023.

[41] <https://www.cps.gov.uk/cps/news/man-who-used-ai-technology-create-child-sexual-abuse-images-jailed>

[42] <https://www.devita.law/fumetti-pedopornografia-cassazione/>

[43] M. Kosinski, What is black box artificial intelligence (AI)?, IBM, 29 October 2024, <https://www.ibm.com/think/topics/black-box-ai>

[44] UCL – London’s Global University, ‘Deepfakes’ ranked as most serious AI crime threat, <https://www.ucl.ac.uk/news/2020/aug/deepfakes-ranked-most-serious-ai-crime-threat>; Europol Innovation Lab, Facing reality? Law enforcement and the challenge of deepfakes, <https://www.europol.europa.eu/cms/sites/default/files/documents/Europol_Innovation_Lab_Facing_Reality_Law_Enforcement_And_The_Challenge_Of_Deepfakes.pdf>; Europol Newsroom, <https://www.europol.europa.eu/media-press/newsroom/news/europol-report-finds-deepfake-technology-could-become-staple-tool-for-organised-crime>; <https://www.nist.gov/publications/guardians-forensic-evidence-evaluating-analytic-systems-against-ai-generated-deepfakes>

[45] For further discussion of anti-forensics, see R. De Vita, V. Guerrisi, A. Laudisa, M. Della Bruna, La prova digitale nel processo penale, DEVITALAW, 2 May 2025, p. 281 ff.

[46] <https://www.nist.gov/publications/guardians-forensic-evidence-evaluating-analytic-systems-against-ai-generated-deepfakes>

[47] <https://www.europarl.europa.eu/RegData/etudes/STUD/2021/690039/EPRS_STU(2021)690039_EN.pdf>

[48] Freeman 2021

[49] Ruff, 2021

[50] Cover, 2022; Langa, 2021; Sloot & Wagensveld, 2022a

[51] Al-Mulla, 2022; Delfino, 2022; Fallis, 2021; Mullen, 2021; Ruff, 2022

[52] <https://www.nist.gov/publications/guardians-forensic-evidence-evaluating-analytic-systems-against-ai-generated-deepfakes>

[53] Cover, 2022

[54] <https://www.nist.gov/publications/guardians-forensic-evidence-evaluating-analytic-systems-against-ai-generated-deepfakes>

[55] Europol, EU Serious and Organised Crime Threat Assessment (SOCTA) 2025, <https://www.europol.europa.eu/cms/sites/default/files/documents/EU-SOCTA-2025.pdf>; Regulation (EU) 2024/1689 (AI Act), <https://eur-lex.europa.eu/legal-content/IT/TXT/PDF/?uri=OJ:L_202401689>; Law 23 September 2025, No. 132, <https://www.gazzettaufficiale.it/eli/id/2025/09/25/25G00143/sg>

[56] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024, <https://eur-lex.europa.eu/legal-content/IT/TXT/PDF/?uri=OJ:L_202401689> and Law 23 September 2025, No. 132, <https://www.gazzettaufficiale.it/eli/id/2025/09/25/25G00143/SG>

[57] AI Act, art. 5, <https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-5>

[58] AI Act, art. 50, <https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-50> on transparency obligations for content generated or artificially manipulated, including deepfakes.

[59] <https://digital-strategy.ec.europa.eu/it/library/digital-omnibus-regulation-proposal>

[60] AI Act, art. 4, <https://ai-act-service-desk.ec.europa.eu/it/ai-act/article-4>

[61] AI Act, arts. 53 and 55, on obligations for general-purpose AI models and GPAI models with systemic risk: <https://ai-act-service-desk.ec.europa.eu/it/ai-act/article-53> and <https://ai-act-service-desk.ec.europa.eu/it/ai-act/article-55>

[62] <https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-53>

[63] Regulation (EU) 2022/2065 (Digital Services Act).

[64] Law 23 September 2025, No. 132, art. 3, <https://www.gazzettaufficiale.it/eli/id/2025/09/25/25G00143/sg>

[65] Law 23 September 2025, No. 132, art. 4, <https://www.gazzettaufficiale.it/eli/id/2025/09/25/25G00143/sg>

[66] Law 23 September 2025, No. 132, art. 15, <https://www.gazzettaufficiale.it/eli/id/2025/09/25/25G00143/sg>, which always reserves to the magistrate the decision on the interpretation and application of the law, on the assessment of facts and evidence, and on the adoption of measures.

[67] Law 23 September 2025, No. 132, art. 20, which designates AgID and ACN as national authorities for AI, and art. 24(2), on the conferral of powers of supervision, inspection, and sanction and on coordination with Article 99 AI Act, <https://www.gazzettaufficiale.it/eli/id/2025/09/25/25G00143/sg>

[68] Law 23 September 2025, No. 132, art. 24, paragraphs 3-5, <https://www.gazzettaufficiale.it/eli/id/2025/09/25/25G00143/sg>, on the delegation to the Government to adapt and specify the discipline of cases of unlawful creation and use of AI systems, with tools for inhibiting dissemination, removing content, and possible autonomous criminal offenses.

[69] Law 23 September 2025, No. 132, art. 26, <https://www.gazzettaufficiale.it/eli/id/2025/09/25/25G00143/sg>, which introduces the common aggravating circumstance for crimes committed by means of AI and the new Article 612-quater of the criminal code, “Illecita diffusione di contenuti generati o alterati con sistemi di intelligenza artificiale”.
