In the contemporary digital space, falsehood is no longer an anomaly: it is a technique of influence, an economic factor, and an instrument of power. Fake news, manipulated reviews, synthetic content, and deepfakes act ever more pervasively on the formation of public opinion, on consumer choices, and on the very reliability of the information ecosystem. Online falsehood does not merely distort the truth: it constructs credible alternative realities capable of shaping consensus, reputation, and the market. In this scenario, artificial intelligence amplifies the quantity, speed, and sophistication of manipulation, blurring the boundary between authenticity and artifice. What is at stake is not only the correctness of information or commercial fairness, but also the resilience of essential values such as transparency, trust, and freedom of choice. Countering these phenomena therefore means committing to the creation of legal, technological, and cultural forms of protection: defending truth as a public good in an environment in which deception can now be produced, disseminated, and monetized on an industrial scale.
1. Data: the quantitative dimension and social perception of the phenomenon
Digital disinformation has assumed an unprecedented quantitative and qualitative dimension. Statistical data highlight converging trends: the growing sophistication of fake news, the increase in content manipulated through artificial intelligence tools, the difficulty for users in distinguishing between authentic and synthetic content, and the high exposure of younger segments of the population to digital information flows.
According to the Reuters Institute for the Study of Journalism’s Digital News Report 2025, 58% of respondents globally believe that distinguishing between true and false information online has become increasingly difficult. In the same report, overall trust in the media averages around 32%, with strong differences between countries, while concern about the spread of online disinformation remains high in most of the areas analyzed, exceeding 50% in several European contexts and in the United States[1].
Similar evidence emerges from an international survey conducted in 2023 by UNESCO in collaboration with Ipsos, which shows that a large majority of Internet users globally declare that they have come into contact with fake news on social networks, with perceived impacts particularly in the fields of political and health information. The study finds that over 85% of respondents fear the political impact of disinformation, while 53% identify artificial intelligence as a factor amplifying the phenomenon. A significant share of users also acknowledge having difficulty distinguishing authentic content from manipulated content, especially when this is generated through artificial intelligence techniques [2].
In the European context, several studies indicate an increase in perceived exposure to disinformation, with some countries being particularly vulnerable to information manipulation campaigns. Among these, Italy is also frequently mentioned, where the spread of disinformation content appears particularly high compared to the European average [3].
According to a 2025 European Union report, 82% of the European sample (86% of the Italian sample) believe they have been exposed to disinformation and fake news often or at least sometimes in the previous seven days. Although just over six respondents out of ten (61%) feel confident that they can recognize disinformation when they encounter it, around three out of ten do not trust their own ability to do so [4].
Among younger generations, the phenomenon is even more significant. Studies on digital literacy and news consumption habits among young people show that social networks and online platforms are increasingly the main access channel to information, but also the place where young users are most exposed to non-verified, decontextualized, or deliberately manipulative content [5].
In the Italian context, recent reports by Censis and other research centers confirm a widespread perception of the seriousness of the problem, but also a persistent underestimation of its effects by part of the population. While many citizens declare themselves concerned about the spread of fake news, a significant share still considers disinformation a secondary phenomenon or believes itself able to recognize it easily, despite empirical evidence suggesting otherwise [6].
To this must be added the increasingly visible role of artificial intelligence in accelerating the production of manipulative content. Recent analyses show a significant increase in information sites generated wholly or largely through automated systems, as well as a growing use of synthetic images and videos in the digital environment [7].
At the global level, numerous analyses highlight a significant increase in disinformation produced through generative artificial intelligence tools: the use of automatic language and image generation models has reduced the production costs of manipulated content, making possible the large-scale creation of false but highly plausible articles, images, and videos [8].
Data reported by Agência Brasil in 2026 indicate, for example, that in the Brazilian context disinformation content generated through artificial intelligence more than tripled between 2024 and 2025 [9]. Further analyses have been conducted by independent organizations such as NewsGuard, which has documented the proliferation of hundreds of websites automatically generated through artificial intelligence tools and designed to graphically imitate reliable news outlets. Such sites publish automatically or semi-automatically generated content, contributing to amplifying the spread of manipulated or unverified information in the digital space [10].
At the same time, a growing number of economic studies highlight the relevance of the phenomenon in the context of the digital market and the online reputation of businesses. Empirical analyses show that even minimal variations in the ratings of review platforms can produce economically significant effects on business revenues, confirming the role of digital reputation as a strategic asset in the platform economy [11].
According to some international estimates, between 10% and 30% of online reviews may be manipulated or inauthentic, with significant effects on consumer behavior. Some estimates also indicate that an artificial increase in the rating of a product or service can generate revenue increases of up to 15–20%, highlighting how reputational manipulation may alter not only the flow of information, but also the competitive balance of the market [12].
These dynamics are further amplified by the growing spread of automation and content generation tools through artificial intelligence in the digital marketing sector; an exponential growth has been recorded in automated systems for generating reviews and comments, often used to simulate consensus or to artificially alter the appeal of products and services [13].
The qualitative leap of the phenomenon emerges with particular clarity in its growing geopolitical dimension: in recent years, numerous investigations have documented coordinated disinformation operations aimed at influencing electoral processes and political debates in various countries. Among these, the so-called “Operation Doppelgänger” has been widely analyzed: attributed to pro-Russian propaganda networks, it was based on the graphic replication of European news websites and on the coordinated dissemination of manipulated content relating to the conflict in Ukraine and to elections in different countries of the European Union [14].
The Institute for International Political Studies (ISPI) has stressed how fake news, generative artificial intelligence, and foreign interference now constitute forms of hybrid threat to liberal democracies, capable of affecting simultaneously informational security, public debate, and institutional stability [15].
In this context, recent episodes — such as the disinformation campaigns identified between 2024 and 2025 in Moldova on the occasion of major elections — show how information manipulation can be used as an instrument of geopolitical pressure and strategic influence [16]. According to the IDMO (Italian Digital Media Observatory) and the Microsoft Threat Analysis Center, the use of generative artificial intelligence is further expanding the capacity to produce and disseminate propagandistic and manipulated content, making influence operations increasingly sophisticated and difficult to detect [17].
2. From episodic to structural disinformation: a systemic phenomenon
In the initial phase of Internet development, disinformation was generally interpreted as an episodic distortion of the information flow, attributable to individual actors or to specific propaganda campaigns. However, the evolution of digital platforms and of economic models based on attention has progressively transformed this phenomenon into a structural component of the contemporary communicative ecosystem.
Today disinformation no longer constitutes a marginal anomaly of the information system, but rather a systemic dynamic closely connected to the functioning of digital infrastructures. The architecture of online platforms is in fact based on recommendation algorithms, user profiling systems, and economic models oriented toward maximizing engagement. In this context, content that arouses intense emotional reactions — indignation, fear, conflict — tends to be more strongly amplified by mechanisms of algorithmic distribution, favoring the spread of polarizing or manipulative narratives [18].
To this dynamic is added the growing impact of generative artificial intelligence, which has further reduced the costs of producing and distributing content. Automatic generation systems now make it possible to produce on a large scale plausible texts, images, and audiovisual products, often difficult to distinguish from authentic ones. Disinformation thus becomes replicable, adaptable, and customizable, since it can be modeled according to the recipient and disseminated through highly targeted campaigns [19].
The result is a qualitative mutation of the information ecosystem in which information asymmetry no longer concerns only individual relations between sender and recipient, but tends to affect the entire digital public space, changing the conditions under which opinions, economic decisions, and political choices are formed. Thus, the phenomenon contributes to the erosion of trust in institutions and in authoritative sources of information and can deeply affect the quality of public debate and the stability of democratic systems.
From this dual impact there progressively emerges a composite legal interest, which may be identified in the integrity of the information ecosystem. Protection cannot be entrusted exclusively to the traditional instruments of information law or competition law. Rather, it requires a multi-level regulatory approach, capable of integrating normative interventions, responsibilities of digital platforms, instruments of algorithmic transparency, and policies of “digital literacy.” In this context, the regulation of the information ecosystem appears destined to become one of the main fields of development of digital economy law and of the governance of online platforms.
3. Generative artificial intelligence, deepfakes, and new frontiers of disinformation
The most recent developments, in this field and beyond, are closely linked to generative artificial intelligence systems capable of producing increasingly realistic textual, visual, and audiovisual content.
Generative models based on deep learning techniques, such as large language models and audiovisual synthesis systems, now allow the automated production of articles, images, voice recordings, and manipulated videos with such a degree of verisimilitude as to make the distinction between authentic content and artificially generated content extremely complex. In this context lies the phenomenon of so-called “deepfakes,” namely synthetic audiovisual content created through artificial intelligence techniques that make it possible to alter or recreate a person’s image and voice in an extremely realistic way [20].
The spread of these technologies introduces a new dimension: whereas in previous phases informational manipulation was based mainly on the selection and reinterpretation of real content, generative artificial intelligence now allows the creation of completely artificial informational events, making possible the large-scale production of plausible but entirely fictitious narratives. Recent studies highlight how such tools can also be employed within actual information manipulation campaigns, further amplifying the capacity for disinformation to spread in the digital environment [21].
In particular, the possibility of using AI to flood the network with realistic images capable of imprinting themselves in users’ memory and going viral may give rise to the so-called “Mandela effect”: a false collective memory in which different people remember the same incorrect details concerning events, images, or statements. An example of this effect, in relation to artificial intelligence, is the fake photo of Pope Francis wearing an enormous white Balenciaga coat, which went viral and entered the (false) collective imagination [57]. The effect stems from the empirical and neuropsychological observation that memory (including autobiographical memory, made up of semantic and episodic recollections) retains traces of what was perceived and of the emotional experience while progressively forgetting their origin and source: psychological truth prevails over objective truth. This dynamic is dramatically amplified by artificial intelligence and by the compulsive consumption of digital content fueled by algorithms.
Faced with such risks, the European legislator has begun to introduce specific legal obligations aimed at ensuring greater transparency of artificially generated content. The European Artificial Intelligence Regulation (AI Act) provides, for example, that AI systems capable of generating synthetic content must adopt suitable measures to make such content clearly identifiable as artificial. In particular, providers of generative AI systems are required to inform users when they interact with content produced or manipulated by means of artificial intelligence [22].
These regulatory interventions reflect growing awareness: in the era of generative artificial intelligence, protecting the information ecosystem requires not only instruments to combat traditional disinformation, but also mechanisms capable of guaranteeing the traceability and recognizability of synthetic content. The challenge for contemporary law therefore lies in developing forms of regulation that make it possible to enhance the innovative potential of artificial intelligence without compromising the reliability of the information environment.
4. Fake news: misinformation, disinformation, and propaganda
In the contemporary scientific and institutional debate, the phenomenon of fake news is increasingly analyzed through a conceptual distinction between misinformation, disinformation, and propaganda. These categories, although sharing the element of untruthfulness or manipulation of information, differ in structure, purpose, and degree of intentionality.
Misinformation consists in the dissemination of false or inaccurate information circulating in the public space without a specific manipulative intent. In such cases, the falsity of the content often derives from interpretative errors, misunderstandings, or the unconscious reproduction of unverified news. The subjective element that characterizes misinformation is therefore the absence of intent: those who disseminate the content do not act with the aim of deliberately altering the informational process, but nonetheless contribute to the circulation of inaccurate or misleading content within the digital ecosystem [23].
Disinformation, by contrast, arises when false information is disseminated knowingly and deliberately, with the objective of influencing perceptions, opinions, or collective behaviors. Its qualifying element may be identified in the intentional will to alter the perception of reality through manipulation of the information flow. In this sense, disinformation represents not merely a pathological deviation of the communicative process, but a true strategy of influence, capable of affecting the functioning of markets, public debate, and, in the most extreme cases, democratic processes [24].
Alongside these phenomena lies digital propaganda, which may be defined as the systematic and organized use of disinformation for political, ideological, or strategic purposes. Unlike episodic forms of informational manipulation, propaganda is characterized by the presence of organized structures, coordinated campaigns, and techniques of artificial amplification of content, often carried out through networks of automated accounts, coordinated influencers, or micro-targeting strategies. In the context of digital platforms, such dynamics take on particular relevance because the algorithmic architecture of recommendation systems tends to privilege content with a high capacity to generate engagement, that is, intense emotional interactions, polarization, and conflict [25].
A field in which such dynamics emerge with particular clarity is that of armed conflicts and hybrid wars, where fake news does not consist merely in the episodic dissemination of false content but forms part of broader strategies of influence and narrative manipulation. One may think, for example, of the already mentioned pro-Russian campaigns linked to operation “Doppelgänger,” based on the graphic cloning of European journalistic outlets and on the dissemination of content constructed to alter the perception of the conflict in Ukraine and to steer public debate in Union countries. Or one may look, in the context of the war between Israel and Hamas, at the circulation, especially in the days immediately following 7 October 2023, of stock images and videos taken from other theaters of war or from previous events, relaunched as though they documented in real time bombardments, attacks, or atrocities of the ongoing conflict, with the effect of maximizing the emotional impact of the news and steering public reaction in a polarized direction. One may likewise consider the delegitimization campaigns directed against humanitarian actors, as in the case of UNRWA, which denounced the systematic dissemination of false and manipulated information used as an instrument of war to discredit its work and indirectly hinder its humanitarian action [26].
An equally delicate field concerns disinformation campaigns directed against judges, magistrates, and judicial institutions. In various European contexts, the strategic dissemination of false or manipulated narratives has been used as an instrument of delegitimization of the judiciary, with the objective of undermining public trust in the impartiality of judicial bodies and, more generally, in the rule of law [27]. These phenomena are particularly relevant because they act on one of the essential presuppositions of constitutional democracies: citizens’ confidence in the independence and neutrality of those called upon to apply the law. In this perspective, media and digital smear campaigns may constitute an indirect pressure mechanism on the judiciary, capable of producing systemic effects not unlike those arising from more traditional forms of intimidation [28]. On a comparative level, the phenomenon has taken on particularly evident forms both in Eastern Europe — where there has been no shortage of organized smear campaigns against judges perceived as hostile to political power — and in the United States, where Chief Justice Roberts has expressly included disinformation and intimidation among the main contemporary threats to judicial independence [29]. As regards the judicial police, the qualitative leap brought about by deepfakes now makes it possible to fabricate synthetic audio and video suitable not only for fraud or impersonation, but also for discrediting investigators, casting doubt on the genuineness of investigative activities, or contaminating the informational context in which digital evidence is perceived and assessed, so much so that Europol considers the capacity to prevent and detect such content a strategic priority for law enforcement agencies [30].
Moreover, the algorithms that regulate the visibility of content on the main social platforms operate on the basis of attention-optimization and user-retention logics. As mentioned above, this economic model, founded on the so-called “attention economy” [31], tends inevitably to favor the dissemination of sensationalistic, emotionally polarizing, or controversial content, which is statistically more capable of generating shares, comments, and reactions [32].
A further manifestation of these dynamics can be observed when disinformation is grafted onto electoral competitions already exposed to criminal, clientelistic, or corrupt conditioning: in these contexts, fake news ceases to be mere false content and becomes an instrument through which territorial control, opaque intermediation, and the exchange of favors are transferred onto the symbolic and reputational plane, altering the perception of candidates, delegitimizing opponents, and progressively restricting the space for public debate [33]. In this perspective, delegitimizing campaigns and support campaigns no longer appear as distinct phenomena, but as two sides of the same strategy of influence: the former aim to isolate candidates, administrators, journalists, or guarantee institutions through unfounded accusations, insinuations, or narratives of fraud; the latter construct around certain candidates an artificial consensus, simulating social rootedness, reliability, and the inevitability of the electoral outcome [34]. In the Italian context, the phenomenon emerges above all as a structural vulnerability of the local information environment: AGCOM has described disinformation as a measurable phenomenon of the national information system. In 2024 the Ministry of the Interior recorded 630 acts of intimidation against local administrators, an increase of 13.9 percent compared with the previous year, and the literature on the relationship between the mafia and electoral competition has shown, on the one hand, the electoral support assured in some contexts by mafia organizations and, on the other, the increase in violence against political figures close to the vote [35].
Outside the national context, Operation “Teatro Invisível” in Brazil concerned a criminal organization dedicated to disseminating false news about mayoral candidates in more than ten municipalities in the State of Rio de Janeiro, while the subsequent “Teatro Invisível II” linked that disinformation circuit to hypotheses of money laundering, procurement fraud, and destruction of digital evidence; in Moldova, several investigations brought to light large-scale illicit Russian financing aimed at electoral manipulation, also through the strategic use and organized dissemination of fake news on Facebook, TikTok, and Telegram.
The overall result of this dynamic is a crisis of epistemic authority in the contemporary information ecosystem. This results in a growing difficulty for citizens and consumers in distinguishing reliable information from manipulated content, with relevant effects not only on the cultural and social level, but also on the legal and economic one [36].
5. False reviews, marketing strategies, and misleading practices
A particularly significant manifestation of disinformation in the digital economy is that of false online reviews, a specific form of informational manipulation capable of directly affecting the functioning of market mechanisms.
In the context of digital platforms, user reviews have assumed an increasingly relevant function in the construction of the commercial reputation of businesses, products, and services. They now represent one of the main tools through which consumers orient their economic choices, performing an informative function which, in fact, integrates or replaces traditional forms of advertising and commercial communication. According to numerous studies, a very high share of consumers systematically consults online reviews before making purchases or choosing services, attributing to such content a high degree of reliability [37].
It is precisely the economic centrality of digital reputation that has favored the spread of manipulative practices aimed at artificially altering public perception of products and services. Among the most widespread techniques are the publication of artificial positive reviews, aimed at improving a company’s reputation, and the dissemination of coordinated negative reviews, directed at damaging competitors. These phenomena fall within “astroturfing” strategies, that is, the artificial creation of apparent consensus through content presented as spontaneous but in reality orchestrated by interested parties [38].
Alongside astroturfing, the practice of so-called “review bombing” has also spread, consisting in the massive publication of coordinated negative reviews, often motivated not by actual consumer experiences but by organized campaigns with reputational, ideological, or commercial purposes. Such dynamics may significantly affect the visibility of products on digital platforms and, consequently, consumer choices [39].
The problem of false reviews assumes relevance not only on an ethical or informational level, but also on a legal and economic one, since it affects various interests protected by the legal system. First of all, manipulation of reviews alters the proper functioning of the digital market, producing distortive effects on competition among economic operators. Secondly, it compromises consumers’ right to receive truthful and transparent information, an essential element for the exercise of informed purchasing choices.
Precisely for these reasons, European law has progressively recognized the legal relevance of online reviews within the discipline of unfair commercial practices. Directive (EU) 2019/2161 — known as the Omnibus Directive — introduced specific transparency obligations for digital platforms, requiring traders who publish reviews to indicate whether and how it has been verified that such reviews come from consumers who have actually purchased or used the reviewed product or service [40].
In parallel, the Digital Services Act has imposed stricter obligations on online platforms regarding algorithmic transparency, management of illegal content, and responsibility in content moderation. Although the regulation does not specifically govern false reviews, it introduces legal instruments intended also to affect such phenomena indirectly, in particular through obligations of systemic risk management and greater accountability of digital platforms [41].
6. The Italian regulatory framework
In the Italian legal system, any legal intervention regarding disinformation must necessarily confront the fundamental principle laid down in Article 21 of the Constitution, which recognizes everyone’s right to freely express their thoughts through speech, writing, and every other means of dissemination.
However, this freedom is not absolute in character, but encounters limits deriving from the need to protect other constitutional values of equal relevance, including the dignity of the person, individual reputation, and public order.
The exercise of freedom of information must nevertheless respect certain fundamental criteria, including the truthfulness of the facts reported, the relevance of the information, and expressive restraint. The balancing between freedom of expression and protection of reputation therefore represents one of the central junctions of information law.
In Italy, the legislation aimed at governing the fight against disinformation phenomena in the broad sense is not organic and unified. Protection is articulated rather through a plurality of legal instruments belonging to different sectors of the legal system, which operate on complementary levels.
On the one hand, the protections of the Consumer Code (Legislative Decree no. 206/2005) are certainly relevant as regards misleading commercial practices under Articles 20 and following [42], as is the legislation on unfair competition provided for by Article 2598 of the Civil Code, or civil liability for reputational damage. On the other hand, our legal system also provides for various hypotheses of potential criminal relevance of disinformation conduct.
One of the most frequent hypotheses is aggravated defamation by means of publicity, provided for by Article 595, paragraph 3, of the Criminal Code. This offense occurs when defamatory content is disseminated through means suitable for reaching an indeterminate number of persons, a category in which case law has also included digital platforms and social networks [43].
Where false news is capable of disturbing public order or generating alarm in the community, the offense of causing alarm to the authorities, provided for by Article 658 of the Criminal Code, may also arise. This provision has sometimes been invoked in relation to the dissemination of false information concerning health emergencies, catastrophic events, or situations of danger to public safety.
Disinformation may also assume criminal relevance in the economic-financial field. Article 185 of the Consolidated Law on Finance (TUF), in fact, punishes conduct of market manipulation, which may include the dissemination of false or misleading news capable of influencing the price of financial instruments.
In cases where disinformation campaigns are carried out through specific computer techniques, the offenses of fraud (Article 640 of the Criminal Code) and impersonation (Article 494 of the Criminal Code), frequently associated with the creation of false profiles on social networks, as well as the offenses of unauthorized access to computer systems (Article 615-ter of the Criminal Code) and unlawful interference in digital communications (Articles 617-quater and 617-sexies of the Criminal Code), may be committed. The offense of fraud, moreover, could also be hypothesized in relation to false online reviews, when such conduct is instrumental in obtaining an unjust profit.
However, none of these offenses is capable of guaranteeing unified protection (at least ex post) with respect to the broader and more multifaceted phenomenon described so far. The failure to introduce a specific sanctioning discipline is perhaps the symptom of a limited perception of the actual harmful scope of this set of conducts, which from time to time are analyzed and narrated individually in the news and in public debate.
In recent years, moreover, a process of rethinking the discipline relating to online reviews has begun. In particular, some legislative proposals — including the bill on small and medium-sized enterprises, definitively approved by the Senate on 4 March 2026 and awaiting publication [44] — have provided for the introduction of specific obligations of transparency and traceability of digital reviews, as well as the recognition of protective instruments such as the right of reply and the authentication of reviews published on platforms.
7. The evolution of the European regulatory framework: Digital Services Act, Digital Markets Act, Omnibus Directive, and AI Act
The growing institutional attention to the phenomenon of disinformation reflects an awareness that is by now widespread at the European and international level. In 2024, the Vice-President of the European Commission, Věra Jourová, described the systemic spread of fake news as "a real danger to democracy," underlining that informational manipulation represents one of the main challenges for the protection of informational pluralism and for the proper functioning of contemporary democratic systems [45].
In recent years the European Union has adopted a series of regulatory instruments which, although pursuing different purposes, together outline an increasingly comprehensive framework for the governance of digital platforms.
One of the pillars of this architecture is the Digital Services Act (DSA), Regulation (EU) 2022/2065, fully applicable since February 2024. The DSA introduces a set of progressive obligations for providers of digital services, with particular attention to Very Large Online Platforms (VLOPs) and very large online search engines. Among the most innovative elements of the regulation are transparency obligations for recommendation algorithms, systems for reporting and removing illegal content, and the requirement of periodic assessments of the systemic risks connected to the functioning of platforms. Among such risks, the regulation expressly includes the dissemination of disinformation and the manipulation of democratic processes [46].
Alongside the DSA stands the Digital Markets Act (DMA), Regulation (EU) 2022/1925, which addresses the concentration of economic power in the digital platform sector. The DMA introduces specific obligations for so-called gatekeepers, namely the large platforms that control digital infrastructures essential for access to online markets. Although the regulation is primarily oriented toward protecting competition, it also indirectly affects the circulation of information in the digital environment, by imposing limits on self-preferencing practices and strengthening interoperability among digital services [47].
In parallel, the European Union has strengthened consumer protection in digital commerce through Directive (EU) 2019/2161, known as the Omnibus Directive. In amending the rules on unfair commercial practices and consumer rights, this act introduces specific transparency obligations for online reviews. In particular, traders who publish reviews must indicate whether and how they verify that such reviews come from consumers who have actually used or purchased the reviewed product or service. The dissemination of false or manipulated reviews may therefore constitute a misleading commercial practice, with the attendant legal liability [48].
More recently, the European regulatory framework has been further broadened by the adoption of the Artificial Intelligence Regulation (AI Act, Regulation (EU) 2024/1689), approved in 2024. Although not specifically dedicated to the phenomenon of disinformation, the AI Act introduces important transparency obligations for artificial intelligence systems capable of generating synthetic content, including the realistic images, audio, and video commonly referred to as deepfakes. The regulation provides that such content must be clearly identifiable as artificially generated, in order to prevent it from being used to manipulate public opinion or to disseminate misleading information [49].
Taken together, these normative instruments outline a significant transformation of European law regarding digital platforms. Whereas in the initial phase of Internet development regulation had concentrated mainly on intermediary liability and on the freedom of circulation of information, the new European regulatory framework appears increasingly oriented toward protecting the integrity of the digital information ecosystem.
From this perspective, the regulation of platforms no longer concerns only the removal of illegal content, but extends to the management of systemic risks connected to the functioning of digital infrastructures, including disinformation, algorithmic manipulation, and distortions of the digital market. The European Union therefore appears to pursue a regulatory model that aims to balance technological innovation with the protection of the fundamental values of the European legal order, including freedom of expression, consumer protection, fair competition, and the proper functioning of democratic processes.
8. Evidentiary profiles and practical difficulties
Despite the presence of civil, administrative, and criminal instruments suitable for repressing certain forms of informational manipulation, the effectiveness of judicial action often founders on structural obstacles of an evidentiary and practical nature.
One of the main critical factors is the anonymity or pseudonymity of online users. Digital platforms allow accounts to be created under non-verifiable or easily falsifiable identities, making it difficult to identify with certainty the subjects responsible for disseminating unlawful content. The use of fictitious digital identities, coordinated profiles, or disposable accounts often allows the authors of disinformation campaigns to operate without leaving traces easily attributable to their real identity [50].
A further element of complexity is the growing use of bots and automated systems to disseminate content. In such cases, reconstructing individual responsibilities becomes particularly complex, since dissemination occurs through distributed technological infrastructures that are at times difficult to trace back to specific legal subjects [51].
These technical difficulties are compounded by problems of a territorial and jurisdictional nature. Digital platforms operate on a global scale, and content may be generated, hosted, and disseminated through servers located in different States. This transnational dimension makes it harder to identify the competent jurisdiction and to apply effectively the instruments of international judicial cooperation, especially where the subjects involved operate from countries with different rules or limited mechanisms of judicial collaboration [52].
9. Reform perspectives
The analysis of the legal, technological, and economic profiles of disinformation highlights how the fight against such phenomena cannot be entrusted exclusively to traditional repressive instruments. The systemic nature of the digital information ecosystem, characterized by algorithmic intermediation, the globality of platforms, and the speed of dissemination of content, requires a structural and multi-level approach, capable of integrating normative, technological, and cultural instruments.
From this perspective, strengthening policies to combat disinformation must necessarily move along several complementary directions.
A first field of intervention concerns the strengthening of transparency obligations for digital platforms. The growing centrality of algorithms in the selection and distribution of content makes it necessary to develop more advanced mechanisms of transparency and accountability with respect to the criteria that determine the visibility of information online. European regulation moves in this direction by introducing algorithmic accountability obligations, requiring large digital platforms to assess and mitigate the systemic risks connected to the dissemination of disinformation [53].
A second profile concerns the reliability of online reviews, which today are among the most influential inputs into consumption decisions. The introduction of step-up authentication systems or other advanced mechanisms for verifying users' identities could help reduce the spread of false or manipulated reviews; strengthening the link between digital identity and published content would in turn increase the traceability of online interactions.
Such instruments may be complemented by systems for certifying informational sources, capable of providing users with reliable indicators of the origin and trustworthiness of content. A further strategic element consists in promoting independent fact-checking systems that operate in a transparent and verifiable way, also with respect to online reviews and user-generated content. Such mechanisms, if adequately structured and independent, can contribute to improving the quality of public debate and to reducing the spread of false or misleading information [54].
In parallel, strengthening digital education and media literacy policies for citizens plays a fundamental role. The ability to recognize manipulated content, verify sources, and understand how algorithms work is in fact one of the most effective tools for preventing disinformation. Numerous international institutions have highlighted that the promotion of digital culture is an essential element of the resilience of democratic societies against informational manipulation campaigns [55].
Finally, growing attention to the principles of algorithmic accountability occupies a central place among reform perspectives. The algorithms that govern the distribution of content on digital platforms significantly influence the visibility of information and the formation of public opinion. The possibility of subjecting such systems to independent audits and regulatory verification is one of the most promising tools for ensuring greater transparency and responsibility in the digital information ecosystem [56].
More than the spread of falsehood in itself, what causes concern today is the inadequacy of the legal system (Italian and European) to govern it in an organic way. There is still no unified discipline capable of preventing informational manipulation ex ante and, at the same time, countering its effects ex post with instruments that are effective and consistent with the nature of the phenomenon. The current system remains fragmented and episodic, entrusted to scattered rules and to remedies that often arrive too late relative to the speed with which deceptive content spreads, takes root, and produces damage. Thus, while technology industrializes deception, the law too often continues to chase after it without fully understanding it. It is precisely in this asymmetry that the weakness of the public response is measured: what is missing is not only an adequate sanction but an overall normative vision. As long as this gap persists, fake news and manipulation will continue to prosper in the cracks of a system that, rather than preventing the phenomenon, endures its evolution.
Prof. Avv. Roberto De Vita
Avv. Giada Caprini
Avv. Marco Della Bruna
References
[1] Reuters Institute, Digital News Report 2025, <https://www.digitalnewsreport.org>
[2] UNESCO – Ipsos, Survey on the Impact of Online Disinformation and hate speech, 2023, <https://www.unesco.org/sites/default/files/medias/fichiers/2023/11/unesco_ipsos_survey.pdf>
[3] European Commission, Tackling Online Disinformation: A European Approach, <https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52018DC0236>
[4] https://europa.eu/eurobarometer/surveys/detail/3592
[5] <https://drive.google.com/file/d/1KwGqR4cB8Yg03YPbfSTXBIe-U_EaJ2s6/view>
[6] Ital Communications – Censis, Disinformazione e fake news in Italia, 2023, <https://italcommunications.it/wp-content/uploads/2025/01/Rapporto-ItalCommunications-Censis-2023.pdf>;
[7] Infodata – Il Sole 24 Ore, <https://www.infodata.ilsole24ore.com/2025/12/11/per-tre-persone-su-dieci-la-disinformazione-non-e-un-grosso-problema-ed-e-questo-il-problema/?refresh_ce=1>
[8] European Parliament Research Service, Artificial Intelligence and Disinformation, <https://www.europarl.europa.eu>
[9] Agência Brasil, analysis of the increase in AI-generated content in 2026, <https://agenciabrasil.ebc.com.br>
[10] NewsGuard, AI-Generated News Sites Report, <https://www.newsguardtech.com>
[11] Luca M., Zervas G., Fake It Till You Make It: Reputation, Competition, and Yelp Review Fraud, Harvard Business School.
[12] OECD, Online Consumer Reviews: The Case of Fake Reviews and Consumer Protection, <https://www.oecd.org>
[13] Duivenvoorde, Generative AI and the future of marketing: A consumer protection perspective, Computer Law & Security Review 57, July 2025, <https://www.sciencedirect.com/science/article/pii/S2212473X25000148>
[14] EUvsDisinfo – Doppelgänger Campaign Analysis, <https://euvsdisinfo.eu/tag/doppelganger/?numberposts=20>
[15] ISPI, Disinformazione e minacce ibride alle democrazie, <https://www.ispionline.it>
[16] Investigative reports on disinformation campaigns in Moldova (2024–2025).
[17] IDMO – Microsoft Threat Analysis Center, Generative AI and Information Manipulation.
[18] S. Vosoughi, D. Roy, S. Aral, The Spread of True and False News Online, Science, 2018.
[19] European Parliamentary Research Service, Information manipulation in the age of generative artificial intelligence (2025), <https://www.europarl.europa.eu/RegData/etudes/BRIE/2025/779259/EPRS_BRI(2025)779259_EN.pdf>
[20] R. Chesney, D. Citron, Deepfakes and the New Disinformation War, Foreign Affairs (2019), <https://www.foreignaffairs.com/articles/world/2018-12-11/deepfakes-and-new-disinformation-war>
[21] European Parliamentary Research Service, Tackling deepfakes in European policy (2021), <https://www.europarl.europa.eu/thinktank/en/document/EPRS_STU(2021)690039>
[22] European Parliament, Artificial Intelligence Act – European regulation on artificial intelligence, <https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence>
[23] UNESCO, Journalism, Fake News and Disinformation. Handbook for Journalism Education and Training (2018), <https://unesdoc.unesco.org/ark:/48223/pf0000265552>
[24] European Commission, Tackling Online Disinformation: A European Approach (2018), <https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52018DC0236>
[25] European Commission, Code of Practice on Disinformation (2022), <https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation>
[26] Cf. EEAS, "Doppelganger strikes back: FIMI activities in the context of the EE24", 2024, <https://euvsdisinfo.eu/uploads/2024/06/EEAS-TechnicalReport-DoppelgangerEE24_June2024.pdf>; EDMO, "Disinformation about Israel/Hamas conflict flooded the EU in October", 2023, <https://edmo.eu/wp-content/uploads/2023/09/EDMO-29-Horizontal-FCB-updated.pdf>; European Commission, "Commission opens formal proceedings against X under the Digital Services Act", 18 December 2023, <https://digital-strategy.ec.europa.eu/en/news/commission-opens-formal-proceedings-against-x-under-digital-services-act>; UNRWA, "The spread of misinformation & disinformation continues to be used as a weapon of war in Gaza", 2024, <https://www.unrwa.org/newsroom/official-statements/spread-misinformation-disinformation-continues-be-used-weapon-war-gaza>.
[27] Cf. OHCHR, "A/75/172: Disciplinary measures against judges and the use of "disguised" sanctions" (2020), <https://www.ohchr.org/en/documents/thematic-reports/a75172-disciplinary-measures-against-judges-and-use-disguised-sanctions>; see also ENCJ, "Statement by the Executive Board of the ENCJ On Pressure and Intimidation of Judges through Media", 17 December 2025, <https://obt-jud.hu/en/statement-executive-board-encj-pressure-and-intimidation-judges-through-media>
[28] ECtHR, Grand Chamber, Morice v. France, 23 April 2015, <https://hudoc.echr.coe.int/eng?i=001-154265>
[29] OKO.press, "Smear campaign coordinated by the Ministry of Justice, aimed to discredit Polish judges, discovered" (2019), <https://oko.press/why-did-the-polish-deputy-minister-of-justice-resign-everything-you-need-to-know-about-the-piebiak-scandal> and Supreme Court of the United States, "2024 Year-End Report on the Federal Judiciary", <https://www.supremecourt.gov/publicinfo/year-end/2024year-endreport.pdf>
[30] Europol, "Facing reality? Law enforcement and the challenge of deepfakes", <https://www.europol.europa.eu/publications-events/publications/facing-reality-law-enforcement-and-challenge-of-deepfakes>; see also Europol, "Europol report finds deepfake technology could become staple tool for organised crime", 28 April 2022, <https://www.europol.europa.eu/media-press/newsroom/news/europol-report-finds-deepfake-technology-could-become-staple-tool-for-organised-crime>
[31] L’economia dell’attenzione e il paradosso che sta uccidendo i giornali, <https://www.feltrinellieducation.it/magazine/l-economia-dell-attenzione-e-il-paradosso-che-sta-uccidendo-i-giornali>
[32] Shoshana Zuboff, The Age of Surveillance Capitalism, PublicAffairs, 2019; see also Reuters Institute, Digital News Report 2024, <https://www.digitalnewsreport.org>
[33] Council of Europe, "Information Disorder: Toward an interdisciplinary framework for research and policy making", <https://edoc.coe.int/en/media/7495-information-disorder-toward-an-interdisciplinary-framework-for-research-and-policy-making.html>; International IDEA – Clingendael Institute, "Protecting Politics: Deterring the Influence of Organized Crime on Elections", <https://www.idea.int/publications/catalogue/protecting-politics-deterring-influence-organized-crime-elections>
[34] Council of Europe, "Information Disorder", cit.; International IDEA – Clingendael Institute, "Protecting Politics", cit., which describes the ways in which organized crime interferes in electoral processes.
[35] AGCOM, "News vs. Fake nel sistema dell'informazione", <https://www.agcom.it/pubblicazioni/rapporti/news-vs-fake-nel-sistema-dellinformazione-interim-report-indagine>; Ministero dell'interno, "Atti intimidatori nei confronti degli amministratori locali", <https://www.interno.gov.it/it/stampa-e-comunicazione/dati-e-statistiche/atti-intimidatori-nei-confronti-amministratori-locali> and "Report anno 2024", <https://www.interno.gov.it/sites/default/files/2025-04/report_atti_intimidatori_amm_locali_anno_2024.pdf>; G. De Feo, G. De Luca, "Mafia in the Ballot Box", in American Economic Journal: Economic Policy, <https://www.aeaweb.org/articles?id=10.1257%2Fpol.20150551>; A. Alesina, S. Piccolo, P. Pinotti, "Organized Crime, Violence, and Politics", in The Review of Economic Studies, <https://academic.oup.com/restud/article/86/2/457/5060718>
[36] C. Wardle, H. Derakhshan, Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe (2017), <https://edoc.coe.int/en/media/7495-information-disorder-toward-an-interdisciplinary-framework-for-research-and-policy-making.html>
[37] OECD, Online Consumer Reviews: The Case of Fake Reviews and Consumer Protection (2019), <https://www.oecd.org/competition/consumer-policy/fake-reviews.htm>
[38] European Commission, Behavioural Study on Misleading Online Reviews (2020), <https://op.europa.eu/en/publication-detail/-/publication/9f9b6d2a-0f0e-11ea-8c1f-01aa75ed71a1>
[39] M. Luca, G. Zervas, Fake It Till You Make It: Reputation, Competition, and Yelp Review Fraud, Harvard Business School Working Paper (2016), <https://www.hbs.edu/faculty/Pages/item.aspx?num=51974>
[40] Directive (EU) 2019/2161 of the European Parliament and of the Council (Omnibus Directive), <https://eur-lex.europa.eu/legal-content/IT/TXT/?uri=CELEX:32019L2161>
[41] Regulation (EU) 2022/2065 – Digital Services Act, <https://eur-lex.europa.eu/legal-content/IT/TXT/?uri=CELEX:32022R2065>
[42] On which the Italian Competition and Market Authority (AGCM) has jurisdiction. In parallel, independent authorities play an increasingly relevant role in combating online disinformation practices. In particular, the Communications Regulatory Authority (AGCOM) exercises supervisory functions in matters of informational pluralism and regulation of digital platforms, also intervening with guidelines and monitoring tools relating to the dissemination of disinformation content in the digital space.
[43] Cass. Pen., Sez. V, 8 June 2015, no. 24431.
[44] Senate Act no. 1484-B definitively approved on 4 March 2026 and awaiting publication, <https://www.senato.it/service/PDF/PDFServer/BGT/01496644.pdf> <https://www.senato.it/export/ddl/full/59139>
[45] European Commission – statements by Vice-President Věra Jourová on disinformation, <https://digital-strategy.ec.europa.eu/it/news/vice-president-jourova-mission-canada-discuss-issues-related-disinformation-and-foreign>
[46] Regulation (EU) 2022/2065 of the European Parliament and of the Council – Digital Services Act, <https://eur-lex.europa.eu/legal-content/IT/TXT/?uri=CELEX:32022R2065>
[47] Regulation (EU) 2022/1925 of the European Parliament and of the Council – Digital Markets Act, <https://eur-lex.europa.eu/legal-content/IT/TXT/?uri=CELEX:32022R1925>
[48] Directive (EU) 2019/2161 of the European Parliament and of the Council – Omnibus Directive, <https://eur-lex.europa.eu/legal-content/IT/TXT/?uri=CELEX:32019L2161>
[49] European Parliament – Artificial Intelligence Act (AI Act), <https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence>
[50] Council of Europe, Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making (Wardle, Derakhshan, 2017), <https://rm.coe.int/information-disorder-report/1680762772>
[51] E. Ferrara et al., The Rise of Social Bots, Communications of the ACM, 2016, <https://cacm.acm.org/research/the-rise-of-social-bots/>
[52] Council of Europe, Budapest Convention on Cybercrime, <https://www.coe.int/en/web/cybercrime/the-budapest-convention>
[53] Regulation (EU) 2022/2065 – Digital Services Act, <https://eur-lex.europa.eu/legal-content/IT/TXT/?uri=CELEX:32022R2065>
[54] European Commission, Code of Practice on Disinformation (2022), <https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation>
[55] UNESCO, Media and Information Literacy: Policy and Strategy Guidelines, <https://www.unesco.org/en/media-information-literacy>
[56] European Parliamentary Research Service, A governance framework for algorithmic accountability and transparency, <https://www.europarl.europa.eu/thinktank/en/document/EPRS_STU(2019)624262>
[57] S. Wells, The Mandela effect tricks our brains with false memories. Is AI making it worse?, National Geographic, 04.08.2025, <https://www.nationalgeographic.it/tutto-cio-che-sappiamo-sull-effetto-mandela-dai-falsi-ricordi-all-impatto-dell-intelligenza-artificiale>



